Test Report: Docker_Linux_crio 17907

Commit: 7ea9a0daea14a922bd9e219098252b67b1b782a8 (2024-01-08, build 32610)
Failed tests (5/316)

|-------|------------------------------------------------------|--------------|
| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                          |       151.92 |
| 167   | TestIngressAddonLegacy/serial/ValidateIngressAddons  |       183.09 |
| 217   | TestMultiNode/serial/PingHostFrom2Pods               |         4.12 |
| 239   | TestRunningBinaryUpgrade                             |        73.15 |
| 255   | TestStoppedBinaryUpgrade/Upgrade                     |        97.15 |
|-------|------------------------------------------------------|--------------|
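
Each failed test can be rerun in isolation from a minikube source checkout. A minimal sketch, assuming the integration suite lives at test/integration and the binary under test is the prebuilt out/minikube-linux-amd64 shown in the logs below (the --minikube-start-args flag name comes from the test harness and may differ across versions):

  # Rerun one failed test; -run takes a per-segment regex, so anchor the leaf name.
  go test ./test/integration -v -timeout 60m \
    -run 'TestAddons/parallel/Ingress$' \
    --minikube-start-args='--driver=docker --container-runtime=crio'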
TestAddons/parallel/Ingress (151.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-793365 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-793365 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-793365 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e201c669-784a-470b-9b7b-326a922faeb6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
2024/01/08 20:12:30 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:344: "nginx" [e201c669-784a-470b-9b7b-326a922faeb6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005127767s
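The readiness poll above (helpers_test.go:344) is roughly what a stock kubectl wait does; a hand-run equivalent using the same label selector and the test's 8m0s budget (a sketch, not the harness's exact mechanism):

  # Block until the nginx test pod reports Ready, or time out after 8 minutes.
  kubectl --context addons-793365 wait --for=condition=ready pod -l run=nginx --timeout=480s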
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-793365 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.734256209s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
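
The curl step is the real failure here: "Process exited with status 28" is the exit code of the remote command, and curl reserves 28 for "operation timed out", so ssh worked but the ingress endpoint never answered within curl's window. A hedged manual repro (profile and namespace names are taken from this run; ingress-nginx-controller is the deployment name the ingress addon normally creates):

  # Probe again with an explicit timeout and verbose output.
  out/minikube-linux-amd64 -p addons-793365 ssh -- \
    curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'

  # Then check whether the controller itself is up and what it logged.
  kubectl --context addons-793365 -n ingress-nginx get pods,svc
  kubectl --context addons-793365 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50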
addons_test.go:286: (dbg) Run:  kubectl --context addons-793365 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-793365 addons disable ingress-dns --alsologtostderr -v=1: (1.007113367s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-793365 addons disable ingress --alsologtostderr -v=1: (7.712593569s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-793365
helpers_test.go:235: (dbg) docker inspect addons-793365:
-- stdout --
	[
	    {
	        "Id": "c121bc6424fe94323111d718fcbdd9396d67f2274847d4271bd42c984bcb0af2",
	        "Created": "2024-01-08T20:10:14.952897155Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19509,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:10:15.315832923Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/c121bc6424fe94323111d718fcbdd9396d67f2274847d4271bd42c984bcb0af2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c121bc6424fe94323111d718fcbdd9396d67f2274847d4271bd42c984bcb0af2/hostname",
	        "HostsPath": "/var/lib/docker/containers/c121bc6424fe94323111d718fcbdd9396d67f2274847d4271bd42c984bcb0af2/hosts",
	        "LogPath": "/var/lib/docker/containers/c121bc6424fe94323111d718fcbdd9396d67f2274847d4271bd42c984bcb0af2/c121bc6424fe94323111d718fcbdd9396d67f2274847d4271bd42c984bcb0af2-json.log",
	        "Name": "/addons-793365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-793365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-793365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/15fb70e7bf452d035ebbb718e6134604fee8b01b8d0096b88d3c77bd5c6cf181-init/diff:/var/lib/docker/overlay2/2fffc6399525ec20cf4113360863206b9b39bff791b2620dc189d266ef6bfe67/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15fb70e7bf452d035ebbb718e6134604fee8b01b8d0096b88d3c77bd5c6cf181/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15fb70e7bf452d035ebbb718e6134604fee8b01b8d0096b88d3c77bd5c6cf181/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15fb70e7bf452d035ebbb718e6134604fee8b01b8d0096b88d3c77bd5c6cf181/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-793365",
	                "Source": "/var/lib/docker/volumes/addons-793365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-793365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-793365",
	                "name.minikube.sigs.k8s.io": "addons-793365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86e5a2e0d9bdb028a7d8e9a2e9b623cca6ac47a3a901453f6684a1f3ca0d4ee6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/86e5a2e0d9bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-793365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c121bc6424fe",
	                        "addons-793365"
	                    ],
	                    "NetworkID": "cccbcc6dff63af1af5203b7ceed96651ea6249541c9591286a1630c7a9d71980",
	                    "EndpointID": "1ff73a54dbc996570b301ed2b5dbdb0aff969c5bb248683b12318435106a2b7a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
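
The inspect output itself is clean: the container is Running, privileged, and holds 192.168.49.2 on the addons-793365 network, which points the timeout at the cluster rather than at Docker. For quicker post-mortems, standard docker CLI Go templates can pull just those fields instead of the full dump:

  # State, published ports, and container IP, without the full JSON blob.
  docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' addons-793365
  docker inspect -f '{{json .NetworkSettings.Ports}}' addons-793365
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-793365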
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-793365 -n addons-793365
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-793365 logs -n 25: (1.367268904s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-529405                                                                     | download-only-529405   | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC | 08 Jan 24 20:09 UTC |
	| delete  | -p download-only-529405                                                                     | download-only-529405   | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC | 08 Jan 24 20:09 UTC |
	| start   | --download-only -p                                                                          | download-docker-615567 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |                     |
	|         | download-docker-615567                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-615567                                                                   | download-docker-615567 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC | 08 Jan 24 20:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-930872   | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |                     |
	|         | binary-mirror-930872                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36653                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-930872                                                                     | binary-mirror-930872   | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC | 08 Jan 24 20:09 UTC |
	| addons  | disable dashboard -p                                                                        | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |                     |
	|         | addons-793365                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |                     |
	|         | addons-793365                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-793365 --wait=true                                                                | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC | 08 Jan 24 20:12 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-793365 ssh cat                                                                       | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:12 UTC |
	|         | /opt/local-path-provisioner/pvc-5e67be60-a644-4a8f-a6a7-599f5a45b0d3_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-793365 addons disable                                                                | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:13 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-793365 addons disable                                                                | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:12 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-793365 ip                                                                            | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:12 UTC |
	| addons  | addons-793365 addons disable                                                                | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:12 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-793365 addons                                                                        | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:12 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-793365 ssh curl -s                                                                   | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:12 UTC |
	|         | addons-793365                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:12 UTC | 08 Jan 24 20:12 UTC |
	|         | -p addons-793365                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | addons-793365                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | -p addons-793365                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-793365 addons                                                                        | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-793365 addons                                                                        | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:13 UTC | 08 Jan 24 20:13 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-793365 ip                                                                            | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	| addons  | addons-793365 addons disable                                                                | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-793365 addons disable                                                                | addons-793365          | jenkins | v1.32.0 | 08 Jan 24 20:14 UTC | 08 Jan 24 20:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:50.729370   18837 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:50.729664   18837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:50.729673   18837 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:50.729678   18837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:50.729885   18837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:09:50.730639   18837 out.go:303] Setting JSON to false
	I0108 20:09:50.731655   18837 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3117,"bootTime":1704741474,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:09:50.731722   18837 start.go:138] virtualization: kvm guest
	I0108 20:09:50.734599   18837 out.go:177] * [addons-793365] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:09:50.736614   18837 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:09:50.738370   18837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:50.736651   18837 notify.go:220] Checking for updates...
	I0108 20:09:50.741438   18837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:09:50.743055   18837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:09:50.745081   18837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:09:50.747014   18837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:09:50.749228   18837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:09:50.775645   18837 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:50.775837   18837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:50.836782   18837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2024-01-08 20:09:50.826010005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:50.836904   18837 docker.go:295] overlay module found
	I0108 20:09:50.839008   18837 out.go:177] * Using the docker driver based on user configuration
	I0108 20:09:50.840451   18837 start.go:298] selected driver: docker
	I0108 20:09:50.840471   18837 start.go:902] validating driver "docker" against <nil>
	I0108 20:09:50.840485   18837 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:09:50.841402   18837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:50.900981   18837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2024-01-08 20:09:50.890892122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:50.901146   18837 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:09:50.901357   18837 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:09:50.903715   18837 out.go:177] * Using Docker driver with root privileges
	I0108 20:09:50.905127   18837 cni.go:84] Creating CNI manager for ""
	I0108 20:09:50.905158   18837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:09:50.905170   18837 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:09:50.905179   18837 start_flags.go:323] config:
	{Name:addons-793365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-793365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:50.906903   18837 out.go:177] * Starting control plane node addons-793365 in cluster addons-793365
	I0108 20:09:50.908411   18837 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:09:50.909869   18837 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:50.911493   18837 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:09:50.911576   18837 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:09:50.911592   18837 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:50.911606   18837 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:50.911711   18837 preload.go:174] Found /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:09:50.911728   18837 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:09:50.912157   18837 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/config.json ...
	I0108 20:09:50.912186   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/config.json: {Name:mk5364308531819de84399de4ddf11bde55098d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:09:50.930331   18837 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:50.930493   18837 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:50.930521   18837 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:09:50.930529   18837 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:09:50.930538   18837 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:09:50.930545   18837 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I0108 20:10:03.351251   18837 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I0108 20:10:03.351289   18837 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:10:03.351327   18837 start.go:365] acquiring machines lock for addons-793365: {Name:mk12ff31f37fd56ad07246db7e533dacad9c43f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:10:03.351439   18837 start.go:369] acquired machines lock for "addons-793365" in 93.276µs
	I0108 20:10:03.351466   18837 start.go:93] Provisioning new machine with config: &{Name:addons-793365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-793365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:10:03.351552   18837 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:10:03.353828   18837 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0108 20:10:03.354180   18837 start.go:159] libmachine.API.Create for "addons-793365" (driver="docker")
	I0108 20:10:03.354221   18837 client.go:168] LocalClient.Create starting
	I0108 20:10:03.354365   18837 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem
	I0108 20:10:03.989381   18837 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem
	I0108 20:10:04.161880   18837 cli_runner.go:164] Run: docker network inspect addons-793365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:10:04.178550   18837 cli_runner.go:211] docker network inspect addons-793365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:10:04.178636   18837 network_create.go:281] running [docker network inspect addons-793365] to gather additional debugging logs...
	I0108 20:10:04.178657   18837 cli_runner.go:164] Run: docker network inspect addons-793365
	W0108 20:10:04.197153   18837 cli_runner.go:211] docker network inspect addons-793365 returned with exit code 1
	I0108 20:10:04.197194   18837 network_create.go:284] error running [docker network inspect addons-793365]: docker network inspect addons-793365: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-793365 not found
	I0108 20:10:04.197210   18837 network_create.go:286] output of [docker network inspect addons-793365]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-793365 not found
	
	** /stderr **
	I0108 20:10:04.197360   18837 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:10:04.215018   18837 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002664bd0}
	I0108 20:10:04.215075   18837 network_create.go:124] attempt to create docker network addons-793365 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 20:10:04.215152   18837 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-793365 addons-793365
	I0108 20:10:04.275129   18837 network_create.go:108] docker network addons-793365 192.168.49.0/24 created
	I0108 20:10:04.275161   18837 kic.go:121] calculated static IP "192.168.49.2" for the "addons-793365" container
	I0108 20:10:04.275239   18837 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:10:04.294148   18837 cli_runner.go:164] Run: docker volume create addons-793365 --label name.minikube.sigs.k8s.io=addons-793365 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:10:04.314861   18837 oci.go:103] Successfully created a docker volume addons-793365
	I0108 20:10:04.314925   18837 cli_runner.go:164] Run: docker run --rm --name addons-793365-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-793365 --entrypoint /usr/bin/test -v addons-793365:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:10:09.271263   18837 cli_runner.go:217] Completed: docker run --rm --name addons-793365-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-793365 --entrypoint /usr/bin/test -v addons-793365:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (4.95630443s)
	I0108 20:10:09.271292   18837 oci.go:107] Successfully prepared a docker volume addons-793365
	I0108 20:10:09.271325   18837 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:09.271356   18837 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:10:09.271431   18837 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-793365:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:10:14.872872   18837 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-793365:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.601384658s)
	I0108 20:10:14.872914   18837 kic.go:203] duration metric: took 5.601565 seconds to extract preloaded images to volume
	W0108 20:10:14.873078   18837 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:10:14.873187   18837 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:10:14.937539   18837 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-793365 --name addons-793365 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-793365 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-793365 --network addons-793365 --ip 192.168.49.2 --volume addons-793365:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:10:15.326811   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Running}}
	I0108 20:10:15.348712   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:15.369931   18837 cli_runner.go:164] Run: docker exec addons-793365 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:10:15.465150   18837 oci.go:144] the created container "addons-793365" has a running status.
	I0108 20:10:15.465185   18837 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa...
	I0108 20:10:15.556248   18837 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:10:15.580527   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:15.599265   18837 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:10:15.599301   18837 kic_runner.go:114] Args: [docker exec --privileged addons-793365 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:10:15.678891   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:15.699003   18837 machine.go:88] provisioning docker machine ...
	I0108 20:10:15.699042   18837 ubuntu.go:169] provisioning hostname "addons-793365"
	I0108 20:10:15.699098   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:15.721418   18837 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:15.722048   18837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0108 20:10:15.722079   18837 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-793365 && echo "addons-793365" | sudo tee /etc/hostname
	I0108 20:10:15.723970   18837 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33124->127.0.0.1:32772: read: connection reset by peer
	I0108 20:10:18.860461   18837 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-793365
	
	I0108 20:10:18.860561   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:18.881477   18837 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:18.881906   18837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0108 20:10:18.881936   18837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-793365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-793365/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-793365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:10:19.004194   18837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:10:19.004241   18837 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-11003/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-11003/.minikube}
	I0108 20:10:19.004272   18837 ubuntu.go:177] setting up certificates
	I0108 20:10:19.004286   18837 provision.go:83] configureAuth start
	I0108 20:10:19.004363   18837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-793365
	I0108 20:10:19.025298   18837 provision.go:138] copyHostCerts
	I0108 20:10:19.025382   18837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem (1679 bytes)
	I0108 20:10:19.025497   18837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem (1078 bytes)
	I0108 20:10:19.025569   18837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem (1123 bytes)
	I0108 20:10:19.025630   18837 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem org=jenkins.addons-793365 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-793365]
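The server cert generated here carries the SAN list shown in the log line (two IPs plus localhost, minikube, and the node name). A self-contained crypto/x509 sketch of issuing such a cert from a local CA, with error handling elided for brevity (this illustrates the idea, it is not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the "san=[...]" log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-793365"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-793365"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}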
	I0108 20:10:19.116685   18837 provision.go:172] copyRemoteCerts
	I0108 20:10:19.116758   18837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:10:19.116789   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:19.139154   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:19.228985   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:10:19.253504   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 20:10:19.277433   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:10:19.301143   18837 provision.go:86] duration metric: configureAuth took 296.844188ms
	I0108 20:10:19.301165   18837 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:10:19.301340   18837 config.go:182] Loaded profile config "addons-793365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:10:19.301475   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:19.320619   18837 main.go:141] libmachine: Using SSH client type: native
	I0108 20:10:19.321003   18837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0108 20:10:19.321036   18837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:10:19.542550   18837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:10:19.542583   18837 machine.go:91] provisioned docker machine in 3.843556874s
	I0108 20:10:19.542594   18837 client.go:171] LocalClient.Create took 16.188361312s
	I0108 20:10:19.542615   18837 start.go:167] duration metric: libmachine.API.Create for "addons-793365" took 16.188436452s
	I0108 20:10:19.542622   18837 start.go:300] post-start starting for "addons-793365" (driver="docker")
	I0108 20:10:19.542635   18837 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:10:19.542695   18837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:10:19.542735   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:19.563397   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:19.658199   18837 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:10:19.661600   18837 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:10:19.661651   18837 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:10:19.661663   18837 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:10:19.661670   18837 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:10:19.661683   18837 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/addons for local assets ...
	I0108 20:10:19.661739   18837 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/files for local assets ...
	I0108 20:10:19.661761   18837 start.go:303] post-start completed in 119.131973ms
	I0108 20:10:19.662023   18837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-793365
	I0108 20:10:19.680506   18837 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/config.json ...
	I0108 20:10:19.680987   18837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:10:19.681096   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:19.699112   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:19.785095   18837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:10:19.789908   18837 start.go:128] duration metric: createHost completed in 16.438333562s
	I0108 20:10:19.789942   18837 start.go:83] releasing machines lock for "addons-793365", held for 16.438490838s
	I0108 20:10:19.790030   18837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-793365
	I0108 20:10:19.807228   18837 ssh_runner.go:195] Run: cat /version.json
	I0108 20:10:19.807301   18837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:10:19.807310   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:19.807387   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:19.826552   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:19.827332   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:20.008634   18837 ssh_runner.go:195] Run: systemctl --version
	I0108 20:10:20.012881   18837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:10:20.154003   18837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:10:20.158826   18837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:10:20.179904   18837 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:10:20.179993   18837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:10:20.210213   18837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 20:10:20.210235   18837 start.go:475] detecting cgroup driver to use...
	I0108 20:10:20.210266   18837 detect.go:196] detected "cgroupfs" cgroup driver on host os
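One common heuristic for this "detecting cgroup driver" step is to check for the cgroup v2 unified hierarchy: v2 hosts expose /sys/fs/cgroup/cgroup.controllers, v1 hosts do not. This is an assumption-level illustration of the idea, not minikube's actual detect.go logic:

package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy, e.g. the cgroupfs driver seen above)")
	}
}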
	I0108 20:10:20.210304   18837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:10:20.225780   18837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:10:20.237390   18837 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:10:20.237449   18837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:10:20.252236   18837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:10:20.268230   18837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:10:20.356518   18837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:10:20.439870   18837 docker.go:233] disabling docker service ...
	I0108 20:10:20.439923   18837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:10:20.460443   18837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:10:20.473329   18837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:10:20.556408   18837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:10:20.643884   18837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:10:20.657095   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:10:20.673077   18837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:10:20.673153   18837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:20.683384   18837 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:10:20.683463   18837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:20.693615   18837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:20.704388   18837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:10:20.714798   18837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:10:20.724729   18837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:10:20.736034   18837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:10:20.743871   18837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:10:20.818840   18837 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:10:20.933864   18837 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:10:20.933993   18837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:10:20.937355   18837 start.go:543] Will wait 60s for crictl version
	I0108 20:10:20.937405   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:10:20.941285   18837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:10:20.977919   18837 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
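The "Will wait 60s for socket path" step above is a bounded poll for the CRI-O unix socket. A plain-Go equivalent of that wait, with a 500ms interval chosen for illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Succeed once the path exists and is actually a unix socket.
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}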
	I0108 20:10:20.978027   18837 ssh_runner.go:195] Run: crio --version
	I0108 20:10:21.016372   18837 ssh_runner.go:195] Run: crio --version
	I0108 20:10:21.056052   18837 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 20:10:21.057591   18837 cli_runner.go:164] Run: docker network inspect addons-793365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
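The docker network inspect template above emits a small JSON document. A struct like the following can decode it; the field names are taken from the template's keys, and the sample payload is a hand-written stand-in rather than live docker output:

package main

import (
	"encoding/json"
	"fmt"
)

type netInfo struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	sample := `{"Name":"addons-793365","Driver":"bridge","Subnet":"192.168.49.0/24","Gateway":"192.168.49.1","MTU":0,"ContainerIPs":["192.168.49.2/24"]}`
	var n netInfo
	if err := json.Unmarshal([]byte(sample), &n); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%s: subnet %s, gateway %s\n", n.Name, n.Subnet, n.Gateway)
}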
	I0108 20:10:21.076092   18837 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:10:21.080008   18837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:10:21.090934   18837 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:21.091013   18837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:10:21.149479   18837 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:10:21.149506   18837 crio.go:415] Images already preloaded, skipping extraction
	I0108 20:10:21.149564   18837 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:10:21.184125   18837 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:10:21.184144   18837 cache_images.go:84] Images are preloaded, skipping loading
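The "all images are preloaded" conclusion comes from decoding crictl images --output json and checking repo tags against an expected list. A sketch of that check, trimmed to the fields needed here (the JSON schema fields are assumptions based on crictl's output, and the expected-image list is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{"registry.k8s.io/pause:3.9"} {
		if !have[want] {
			fmt.Println("missing:", want)
		}
	}
	fmt.Printf("%d tagged images present\n", len(have))
}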
	I0108 20:10:21.184193   18837 ssh_runner.go:195] Run: crio config
	I0108 20:10:21.228763   18837 cni.go:84] Creating CNI manager for ""
	I0108 20:10:21.228795   18837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:10:21.228829   18837 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:10:21.228859   18837 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-793365 NodeName:addons-793365 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:10:21.229075   18837 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-793365"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
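A config document like the one above can be rendered from a Go text/template; this tiny sketch fills in only the advertise address, port, and node name. The template shape is illustrative, not minikube's actual template files:

package main

import (
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	t.Execute(os.Stdout, struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.49.2", 8443, "addons-793365"})
}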
	
	I0108 20:10:21.229196   18837 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-793365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-793365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:10:21.229278   18837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:10:21.239280   18837 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:10:21.239408   18837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:10:21.248891   18837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0108 20:10:21.266222   18837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:10:21.284162   18837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0108 20:10:21.301398   18837 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:10:21.304864   18837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:10:21.316401   18837 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365 for IP: 192.168.49.2
	I0108 20:10:21.316446   18837 certs.go:190] acquiring lock for shared ca certs: {Name:mk77871b3b3f5891ac4ba9a63281bc46e0e62e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.316614   18837 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key
	I0108 20:10:21.376660   18837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt ...
	I0108 20:10:21.376702   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt: {Name:mkbffc3800e846049b243c661afe1405487dd893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.376941   18837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key ...
	I0108 20:10:21.376953   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key: {Name:mk18820b1b685dde82c2eabaf4b7cab72543908f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.377041   18837 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key
	I0108 20:10:21.485844   18837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt ...
	I0108 20:10:21.485895   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt: {Name:mkefad6204b61ceba838fa820f8defa9a397ada7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.486141   18837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key ...
	I0108 20:10:21.486153   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key: {Name:mk3001f4ac38abc800ab468cb0af0f66af64bc86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.486266   18837 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.key
	I0108 20:10:21.486281   18837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt with IP's: []
	I0108 20:10:21.537833   18837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt ...
	I0108 20:10:21.537866   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: {Name:mk61f7a82285e73e572002b11fa8db9c0c2c19de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.538037   18837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.key ...
	I0108 20:10:21.538048   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.key: {Name:mk29f819fda468c532d994ac34f161155cae07b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.538119   18837 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.key.dd3b5fb2
	I0108 20:10:21.538135   18837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:10:21.594487   18837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.crt.dd3b5fb2 ...
	I0108 20:10:21.594537   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.crt.dd3b5fb2: {Name:mk28859128386d4cd9f29687411862785259a666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.594772   18837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.key.dd3b5fb2 ...
	I0108 20:10:21.594788   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.key.dd3b5fb2: {Name:mk03184949b520d9b8052de786555eefb0a8e03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.594877   18837 certs.go:337] copying /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.crt
	I0108 20:10:21.594961   18837 certs.go:341] copying /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.key
	I0108 20:10:21.595019   18837 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.key
	I0108 20:10:21.595040   18837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.crt with IP's: []
	I0108 20:10:21.676263   18837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.crt ...
	I0108 20:10:21.676307   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.crt: {Name:mk82e28b49a50e4bfdd7fa6b576f3e56760a304b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.676517   18837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.key ...
	I0108 20:10:21.676533   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.key: {Name:mkecbbe5ddc484f5be76b1fb5d647890e1e1d44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:21.676713   18837 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:10:21.676750   18837 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:10:21.676776   18837 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:10:21.676801   18837 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem (1679 bytes)
	I0108 20:10:21.677513   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:10:21.703475   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 20:10:21.728734   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:10:21.753383   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:10:21.778097   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:10:21.801875   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:10:21.825149   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:10:21.849153   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:10:21.873786   18837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:10:21.896641   18837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:10:21.914964   18837 ssh_runner.go:195] Run: openssl version
	I0108 20:10:21.920617   18837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:10:21.930782   18837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:10:21.934791   18837 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:10:21.934856   18837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:10:21.942273   18837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:10:21.952277   18837 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:10:21.956272   18837 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:10:21.956329   18837 kubeadm.go:404] StartCluster: {Name:addons-793365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-793365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:21.956406   18837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:10:21.956463   18837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:10:21.992784   18837 cri.go:89] found id: ""
	I0108 20:10:21.992868   18837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:10:22.002099   18837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:10:22.010746   18837 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:10:22.010806   18837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:10:22.019956   18837 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:10:22.020020   18837 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 20:10:22.109505   18837 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 20:10:22.177990   18837 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:10:31.716579   18837 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:10:31.716648   18837 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:10:31.716786   18837 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:10:31.716886   18837 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 20:10:31.716934   18837 kubeadm.go:322] OS: Linux
	I0108 20:10:31.717024   18837 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:10:31.717101   18837 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:10:31.717167   18837 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:10:31.717238   18837 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:10:31.717305   18837 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:10:31.717385   18837 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:10:31.717457   18837 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 20:10:31.717539   18837 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 20:10:31.717622   18837 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 20:10:31.717730   18837 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:10:31.717869   18837 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:10:31.718000   18837 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:10:31.718098   18837 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:10:31.719999   18837 out.go:204]   - Generating certificates and keys ...
	I0108 20:10:31.720170   18837 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:10:31.720271   18837 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:10:31.720358   18837 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:10:31.720430   18837 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:10:31.720504   18837 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:10:31.720564   18837 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:10:31.720632   18837 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:10:31.720762   18837 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-793365 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:10:31.720825   18837 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:10:31.720941   18837 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-793365 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:10:31.721037   18837 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:10:31.721104   18837 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:10:31.721152   18837 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:10:31.721227   18837 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:10:31.721285   18837 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:10:31.721347   18837 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:10:31.721431   18837 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:10:31.721494   18837 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:10:31.721590   18837 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:10:31.721673   18837 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:10:31.723432   18837 out.go:204]   - Booting up control plane ...
	I0108 20:10:31.723566   18837 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:10:31.723672   18837 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:10:31.723785   18837 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:10:31.723927   18837 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:10:31.724075   18837 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:10:31.724146   18837 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:10:31.724340   18837 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:10:31.724433   18837 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501838 seconds
	I0108 20:10:31.724553   18837 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:10:31.724731   18837 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:10:31.724852   18837 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:10:31.725107   18837 kubeadm.go:322] [mark-control-plane] Marking the node addons-793365 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:10:31.725189   18837 kubeadm.go:322] [bootstrap-token] Using token: 1t81u3.54j6qgotvq53j4fs
	I0108 20:10:31.726767   18837 out.go:204]   - Configuring RBAC rules ...
	I0108 20:10:31.726902   18837 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:10:31.727005   18837 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:10:31.727223   18837 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:10:31.727449   18837 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:10:31.727582   18837 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:10:31.727699   18837 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:10:31.727824   18837 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:10:31.727902   18837 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:10:31.727980   18837 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:10:31.727993   18837 kubeadm.go:322] 
	I0108 20:10:31.728083   18837 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:10:31.728106   18837 kubeadm.go:322] 
	I0108 20:10:31.728214   18837 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:10:31.728225   18837 kubeadm.go:322] 
	I0108 20:10:31.728259   18837 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:10:31.728342   18837 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:10:31.728427   18837 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:10:31.728437   18837 kubeadm.go:322] 
	I0108 20:10:31.728536   18837 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:10:31.728547   18837 kubeadm.go:322] 
	I0108 20:10:31.728613   18837 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:10:31.728626   18837 kubeadm.go:322] 
	I0108 20:10:31.728669   18837 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:10:31.728747   18837 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:10:31.728844   18837 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:10:31.728854   18837 kubeadm.go:322] 
	I0108 20:10:31.728978   18837 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:10:31.729053   18837 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:10:31.729059   18837 kubeadm.go:322] 
	I0108 20:10:31.729124   18837 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1t81u3.54j6qgotvq53j4fs \
	I0108 20:10:31.729208   18837 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 \
	I0108 20:10:31.729228   18837 kubeadm.go:322] 	--control-plane 
	I0108 20:10:31.729234   18837 kubeadm.go:322] 
	I0108 20:10:31.729300   18837 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:10:31.729307   18837 kubeadm.go:322] 
	I0108 20:10:31.729372   18837 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1t81u3.54j6qgotvq53j4fs \
	I0108 20:10:31.729488   18837 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 
	I0108 20:10:31.729507   18837 cni.go:84] Creating CNI manager for ""
	I0108 20:10:31.729516   18837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:10:31.731594   18837 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:10:31.733339   18837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:10:31.793561   18837 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:10:31.793589   18837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:10:31.813756   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:10:32.670868   18837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:10:32.670955   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:32.670966   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=addons-793365 minikube.k8s.io/updated_at=2024_01_08T20_10_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:32.680983   18837 ops.go:34] apiserver oom_adj: -16
	I0108 20:10:32.817462   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:33.318171   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:33.817657   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:34.317589   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:34.818041   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:35.318295   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:35.817981   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:36.318195   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:36.817733   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:37.318322   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:37.817825   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:38.318477   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:38.817950   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:39.318196   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:39.817899   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:40.318332   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:40.817998   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:41.317862   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:41.817935   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:42.318344   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:42.817916   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:43.318167   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:43.817859   18837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:10:43.886713   18837 kubeadm.go:1088] duration metric: took 11.215833942s to wait for elevateKubeSystemPrivileges.
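The repeated "kubectl get sa default" runs above are a poll loop: the RBAC binding step waits until the default service account exists in the new cluster. A plain-Go equivalent of that wait (the command and kubeconfig path mirror the log; the helper function and interval are ours):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account ready")
}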
	I0108 20:10:43.886763   18837 kubeadm.go:406] StartCluster complete in 21.930436141s
	I0108 20:10:43.886792   18837 settings.go:142] acquiring lock: {Name:mk2f02a606763d8db203f5ac009c4f8430c5c61d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:43.886941   18837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:10:43.887391   18837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/kubeconfig: {Name:mkc68e8b275b7f7ddea94f238057103f0099d605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:43.887762   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:10:43.887769   18837 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 20:10:43.887874   18837 addons.go:69] Setting cloud-spanner=true in profile "addons-793365"
	I0108 20:10:43.887893   18837 addons.go:69] Setting yakd=true in profile "addons-793365"
	I0108 20:10:43.887902   18837 addons.go:237] Setting addon cloud-spanner=true in "addons-793365"
	I0108 20:10:43.887894   18837 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-793365"
	I0108 20:10:43.887915   18837 addons.go:237] Setting addon yakd=true in "addons-793365"
	I0108 20:10:43.887931   18837 addons.go:69] Setting inspektor-gadget=true in profile "addons-793365"
	I0108 20:10:43.887966   18837 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-793365"
	I0108 20:10:43.887974   18837 addons.go:237] Setting addon inspektor-gadget=true in "addons-793365"
	I0108 20:10:43.887982   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.887998   18837 addons.go:69] Setting registry=true in profile "addons-793365"
	I0108 20:10:43.888016   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.888017   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.888018   18837 config.go:182] Loaded profile config "addons-793365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:10:43.888032   18837 addons.go:69] Setting metrics-server=true in profile "addons-793365"
	I0108 20:10:43.888047   18837 addons.go:237] Setting addon metrics-server=true in "addons-793365"
	I0108 20:10:43.888022   18837 addons.go:237] Setting addon registry=true in "addons-793365"
	I0108 20:10:43.888082   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.888132   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.888164   18837 addons.go:69] Setting default-storageclass=true in profile "addons-793365"
	I0108 20:10:43.888182   18837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-793365"
	I0108 20:10:43.888333   18837 addons.go:69] Setting helm-tiller=true in profile "addons-793365"
	I0108 20:10:43.888356   18837 addons.go:237] Setting addon helm-tiller=true in "addons-793365"
	I0108 20:10:43.888423   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.888468   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.888557   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.888572   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.888628   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.888637   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.888649   18837 addons.go:69] Setting gcp-auth=true in profile "addons-793365"
	I0108 20:10:43.888683   18837 mustload.go:65] Loading cluster: addons-793365
	I0108 20:10:43.888834   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.888921   18837 config.go:182] Loaded profile config "addons-793365": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:10:43.889184   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.892841   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.892947   18837 addons.go:69] Setting storage-provisioner=true in profile "addons-793365"
	I0108 20:10:43.893469   18837 addons.go:237] Setting addon storage-provisioner=true in "addons-793365"
	I0108 20:10:43.893562   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.892963   18837 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-793365"
	I0108 20:10:43.893667   18837 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-793365"
	I0108 20:10:43.893985   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.894095   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.892988   18837 addons.go:69] Setting volumesnapshots=true in profile "addons-793365"
	I0108 20:10:43.894513   18837 addons.go:237] Setting addon volumesnapshots=true in "addons-793365"
	I0108 20:10:43.894636   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.895839   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.887982   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.893053   18837 addons.go:69] Setting ingress-dns=true in profile "addons-793365"
	I0108 20:10:43.893065   18837 addons.go:69] Setting ingress=true in profile "addons-793365"
	I0108 20:10:43.893167   18837 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-793365"
	I0108 20:10:43.899591   18837 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-793365"
	I0108 20:10:43.899710   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.900320   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.906885   18837 addons.go:237] Setting addon ingress-dns=true in "addons-793365"
	I0108 20:10:43.907003   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.907420   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.908131   18837 addons.go:237] Setting addon ingress=true in "addons-793365"
	I0108 20:10:43.908296   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.908928   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.935929   18837 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 20:10:43.934068   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.940919   18837 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 20:10:43.937683   18837 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 20:10:43.940227   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.942696   18837 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 20:10:43.942811   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 20:10:43.942885   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:43.944661   18837 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 20:10:43.943219   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 20:10:43.943404   18837 addons.go:237] Setting addon default-storageclass=true in "addons-793365"
	I0108 20:10:43.946331   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.946992   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.948938   18837 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 20:10:43.950370   18837 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 20:10:43.950400   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 20:10:43.950471   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:43.947544   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:43.970604   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 20:10:43.973433   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 20:10:43.977471   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 20:10:43.979557   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 20:10:43.978826   18837 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-793365"
	I0108 20:10:43.982193   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 20:10:43.982250   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:43.983637   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 20:10:43.984638   18837 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 20:10:43.984764   18837 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 20:10:43.985358   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:43.986102   18837 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 20:10:43.989060   18837 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 20:10:43.989175   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 20:10:43.989196   18837 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:10:43.989217   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 20:10:43.989248   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:43.989271   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:43.989441   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 20:10:43.989457   18837 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 20:10:43.989566   18837 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:10:43.989605   18837 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 20:10:43.990985   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 20:10:43.990993   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 20:10:43.991003   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 20:10:43.992416   18837 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 20:10:43.992494   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:43.996152   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:43.997185   18837 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 20:10:43.998389   18837 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:10:43.998840   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 20:10:43.998857   18837 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:10:44.001777   18837 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 20:10:44.001803   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 20:10:44.001871   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:44.000313   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:44.000323   18837 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:10:44.005592   18837 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:10:44.005709   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:10:44.005785   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:44.006002   18837 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 20:10:44.005635   18837 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 20:10:44.010711   18837 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 20:10:44.010744   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 20:10:44.010821   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:44.009250   18837 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:10:44.010857   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 20:10:44.013430   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:44.016413   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.020655   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.020928   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.034001   18837 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:10:44.034039   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:10:44.034128   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:44.056098   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.061009   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.061347   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
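
The /bin/bash pipeline above is how minikube publishes the host's gateway address to pods: it dumps the coredns ConfigMap, uses sed to splice a hosts stanza (mapping host.minikube.internal to 192.168.49.1) in front of the "forward . /etc/resolv.conf" plugin and a "log" directive in front of "errors", then replaces the ConfigMap. Below is a minimal Go re-rendering of those two sed expressions, illustrative only and not minikube source; injectHostRecord is an invented name for this sketch.

    package main

    import (
        "fmt"
        "strings"
    )

    const hostsStanza = `        hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
    `

    // injectHostRecord mirrors the two sed expressions in the command above:
    // insert the hosts stanza before the "forward . /etc/resolv.conf" line,
    // and a "log" directive before the "errors" line.
    func injectHostRecord(corefile string) string {
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            trimmed := strings.TrimSpace(line)
            if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
                out.WriteString(hostsStanza)
            }
            if trimmed == "errors" {
                out.WriteString("        log\n")
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
        fmt.Print(injectHostRecord(corefile))
    }
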
	I0108 20:10:44.063933   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.066135   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.070039   18837 out.go:177]   - Using image docker.io/busybox:stable
	I0108 20:10:44.071561   18837 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 20:10:44.071765   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.072918   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.073167   18837 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:10:44.073183   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 20:10:44.073248   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:44.074582   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.075286   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.085696   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:44.086730   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	W0108 20:10:44.099806   18837 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 20:10:44.099867   18837 retry.go:31] will retry after 276.185619ms: ssh: handshake failed: EOF
	I0108 20:10:44.113586   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	W0108 20:10:44.116073   18837 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 20:10:44.116102   18837 retry.go:31] will retry after 346.255884ms: ssh: handshake failed: EOF
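
The W/I pairs above show the dial-retry behavior in sshutil: a failed SSH handshake is logged as a warning and the dial is retried after a short randomized delay instead of aborting addon installation. A minimal, self-contained sketch of that pattern follows; dialWithRetry, the attempt limit, and the delay range are illustrative stand-ins, not minikube's actual retry.go API.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // dialWithRetry keeps re-invoking dial until it succeeds or the attempt
    // budget runs out, sleeping a randomized sub-second delay between tries
    // (cf. "will retry after 276.185619ms" above).
    func dialWithRetry(dial func() error, maxAttempts int) error {
        var err error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if err = dial(); err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(200)) * time.Millisecond
            fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
    }

    func main() {
        calls := 0
        err := dialWithRetry(func() error {
            calls++
            if calls < 3 { // fail twice, then connect
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        }, 5)
        fmt.Println("connected:", err == nil)
    }
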
	I0108 20:10:44.291390   18837 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 20:10:44.291424   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 20:10:44.296641   18837 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 20:10:44.296691   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 20:10:44.390956   18837 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 20:10:44.390982   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 20:10:44.412198   18837 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-793365" context rescaled to 1 replicas
	I0108 20:10:44.412244   18837 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:10:44.414062   18837 out.go:177] * Verifying Kubernetes components...
	I0108 20:10:44.415680   18837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:10:44.498169   18837 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 20:10:44.498290   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 20:10:44.498716   18837 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 20:10:44.498772   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 20:10:44.509746   18837 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 20:10:44.509773   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 20:10:44.510347   18837 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 20:10:44.510408   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 20:10:44.589648   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 20:10:44.591774   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:10:44.597941   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:10:44.599763   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:10:44.694310   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 20:10:44.694696   18837 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:10:44.694751   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 20:10:44.694990   18837 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 20:10:44.695045   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 20:10:44.698365   18837 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 20:10:44.698395   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 20:10:44.703287   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:10:44.790527   18837 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 20:10:44.790562   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 20:10:44.802647   18837 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 20:10:44.802738   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 20:10:44.811072   18837 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 20:10:44.811104   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 20:10:44.893994   18837 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 20:10:44.894029   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 20:10:44.997019   18837 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:10:44.997147   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 20:10:45.000029   18837 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 20:10:45.000121   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 20:10:45.089064   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:10:45.098584   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:10:45.098948   18837 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 20:10:45.099001   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 20:10:45.192492   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:10:45.298069   18837 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 20:10:45.298157   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 20:10:45.305724   18837 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:10:45.305819   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 20:10:45.312642   18837 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 20:10:45.312750   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 20:10:45.407122   18837 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 20:10:45.407259   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 20:10:45.496236   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:10:45.690309   18837 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 20:10:45.690361   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 20:10:45.706475   18837 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 20:10:45.706562   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 20:10:45.706886   18837 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 20:10:45.706945   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 20:10:45.992273   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:10:45.999063   18837 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:10:45.999089   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 20:10:46.288962   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:10:46.389525   18837 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 20:10:46.389567   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 20:10:46.390920   18837 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 20:10:46.390946   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 20:10:46.604564   18837 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:10:46.604687   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 20:10:46.688126   18837 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 20:10:46.688210   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 20:10:46.695301   18837 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.633920228s)
	I0108 20:10:46.695435   18837 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0108 20:10:46.695505   18837 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.279796257s)
	I0108 20:10:46.696537   18837 node_ready.go:35] waiting up to 6m0s for node "addons-793365" to be "Ready" ...
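
From here the log alternates between node_ready.go and kapi.go poll lines: each waits up to a deadline (6m0s for the node, per-addon timeouts for labeled pods), re-checking status on an interval and logging the current state on every pass. A generic sketch of that poll loop is below, with an invented waitFor helper standing in for the real machinery.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor re-evaluates cond every interval until it returns true, an
    // error, or the timeout elapses -- the shape behind the repeated
    // `waiting for pod ... current state: Pending` lines that follow.
    func waitFor(cond func() (bool, error), interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := cond()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        err := waitFor(func() (bool, error) {
            // Stand-in for checking node "addons-793365" for Ready=True.
            return time.Since(start) > 2*time.Second, nil
        }, 500*time.Millisecond, 6*time.Minute)
        fmt.Println("ready:", err == nil)
    }
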
	I0108 20:10:47.090033   18837 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 20:10:47.090061   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 20:10:47.189795   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:10:47.689556   18837 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 20:10:47.689651   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 20:10:48.390381   18837 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 20:10:48.390409   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 20:10:48.791600   18837 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:10:48.791697   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 20:10:48.799772   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:10:49.302184   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:10:49.900595   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.310833328s)
	I0108 20:10:50.789246   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.191178725s)
	I0108 20:10:50.789246   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.189440026s)
	I0108 20:10:50.789320   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.094929027s)
	I0108 20:10:50.789211   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.197223501s)
	I0108 20:10:50.789360   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.086039082s)
	I0108 20:10:50.897350   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.808176s)
	I0108 20:10:50.897558   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.79886981s)
	I0108 20:10:50.897592   18837 addons.go:473] Verifying addon registry=true in "addons-793365"
	I0108 20:10:50.899424   18837 out.go:177] * Verifying registry addon...
	I0108 20:10:50.902248   18837 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 20:10:50.916403   18837 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 20:10:50.916449   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:51.191218   18837 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 20:10:51.191321   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:51.201534   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:10:51.221344   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:51.411430   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:51.614532   18837 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 20:10:51.699979   18837 addons.go:237] Setting addon gcp-auth=true in "addons-793365"
	I0108 20:10:51.700043   18837 host.go:66] Checking if "addons-793365" exists ...
	I0108 20:10:51.700539   18837 cli_runner.go:164] Run: docker container inspect addons-793365 --format={{.State.Status}}
	I0108 20:10:51.725302   18837 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 20:10:51.725378   18837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-793365
	I0108 20:10:51.748545   18837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/addons-793365/id_rsa Username:docker}
	I0108 20:10:51.907994   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:52.103655   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.911006485s)
	I0108 20:10:52.103707   18837 addons.go:473] Verifying addon ingress=true in "addons-793365"
	I0108 20:10:52.103752   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.607452824s)
	I0108 20:10:52.103788   18837 addons.go:473] Verifying addon metrics-server=true in "addons-793365"
	I0108 20:10:52.106577   18837 out.go:177] * Verifying ingress addon...
	I0108 20:10:52.103808   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.111429372s)
	I0108 20:10:52.103928   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.814922501s)
	I0108 20:10:52.104054   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.91401441s)
	I0108 20:10:52.109333   18837 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-793365 service yakd-dashboard -n yakd-dashboard
	
	
	W0108 20:10:52.108036   18837 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 20:10:52.108705   18837 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 20:10:52.110724   18837 retry.go:31] will retry after 315.458174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
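
The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRDs that define its kind, and the API server rejects it because CRD registration has not finished ("ensure CRDs are installed first"). The retry at 20:10:52.427 below re-runs the same apply with --force and succeeds once the CRDs are established (completion at 20:10:54.098). A rough sketch of that apply-then-retry-with-force flow, with invented helper names and a fixed delay in place of the real backoff:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyAddon runs kubectl apply over a set of manifests; if the first
    // attempt fails (e.g. racing CRD registration), it retries once with
    // --force after a short delay (cf. "will retry after 315.458174ms").
    func applyAddon(kubeconfig string, files []string) error {
        args := append([]string{"--kubeconfig", kubeconfig, "apply"}, flagged(files)...)
        if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
            fmt.Printf("apply failed, will retry with --force: %s\n", out)
            time.Sleep(300 * time.Millisecond)
            args = append([]string{"--kubeconfig", kubeconfig, "apply", "--force"}, flagged(files)...)
            out, err = exec.Command("kubectl", args...).CombinedOutput()
            if err != nil {
                return fmt.Errorf("apply --force failed: %w: %s", err, out)
            }
        }
        return nil
    }

    // flagged turns each manifest path into a "-f <path>" argument pair.
    func flagged(files []string) []string {
        var args []string
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return args
    }

    func main() {
        _ = applyAddon("/var/lib/minikube/kubeconfig", []string{
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
        })
    }
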
	I0108 20:10:52.115427   18837 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 20:10:52.115459   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:52.407318   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:52.427233   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:10:52.615173   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:52.904106   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.601777699s)
	I0108 20:10:52.904135   18837 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.178797385s)
	I0108 20:10:52.904165   18837 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-793365"
	I0108 20:10:52.906579   18837 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:10:52.908067   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:52.910386   18837 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 20:10:52.908820   18837 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 20:10:52.912633   18837 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 20:10:52.916320   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 20:10:52.917357   18837 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 20:10:52.923292   18837 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 20:10:52.923320   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:52.937826   18837 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 20:10:52.937849   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 20:10:52.997589   18837 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:10:52.997620   18837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 20:10:53.016963   18837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:10:53.117018   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:53.205203   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:10:53.406860   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:53.422995   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:53.615769   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:53.907265   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:53.923794   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:54.098406   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.671109275s)
	I0108 20:10:54.117058   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:54.413383   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:54.489156   18837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.472142508s)
	I0108 20:10:54.490136   18837 addons.go:473] Verifying addon gcp-auth=true in "addons-793365"
	I0108 20:10:54.492449   18837 out.go:177] * Verifying gcp-auth addon...
	I0108 20:10:54.493918   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:54.494765   18837 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 20:10:54.500984   18837 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 20:10:54.501022   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:54.615665   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:54.906247   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:54.922473   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:54.999549   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:55.116569   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:55.408921   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:55.422185   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:55.499082   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:55.615844   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:55.700685   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:10:55.907551   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:55.990486   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:55.998521   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:56.117208   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:56.407071   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:56.422968   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:56.499400   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:56.615681   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:56.908753   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:56.924101   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:56.999245   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:57.115069   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:57.408080   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:57.421872   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:57.501789   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:57.615827   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:57.907890   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:57.922633   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:57.998897   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:58.115868   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:58.200032   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:10:58.407248   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:58.423271   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:58.499320   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:58.615381   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:58.907779   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:58.924374   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:58.999562   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:59.117436   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:59.406927   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:59.421626   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:59.499282   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:10:59.614529   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:10:59.907442   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:10:59.923318   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:10:59.999747   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:00.115627   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:00.407303   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:00.423144   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:00.499774   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:00.615657   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:00.701224   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:00.907597   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:00.923640   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:00.998389   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:01.115867   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:01.406614   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:01.423787   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:01.498418   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:01.614885   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:01.907271   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:01.921859   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:01.999889   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:02.115542   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:02.407767   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:02.422686   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:02.498570   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:02.616155   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:02.906994   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:02.921496   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:02.999152   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:03.114940   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:03.200577   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:03.405764   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:03.422445   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:03.497725   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:03.615203   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:03.906729   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:03.921335   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:04.000318   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:04.115560   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:04.407288   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:04.422065   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:04.497569   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:04.616074   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:04.906882   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:04.922533   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:04.999141   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:05.116111   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:05.407944   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:05.421464   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:05.499188   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:05.614953   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:05.700594   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:05.907614   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:05.922263   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:05.998444   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:06.116069   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:06.407640   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:06.423970   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:06.498356   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:06.615660   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:06.907205   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:06.922476   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:06.998838   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:07.115313   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:07.408154   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:07.422393   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:07.498932   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:07.615510   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:07.907076   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:07.922599   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:07.998772   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:08.115610   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:08.199987   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:08.408777   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:08.429425   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:08.498681   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:08.615459   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:08.908237   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:08.922641   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:08.999077   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:09.114396   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:09.406694   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:09.422022   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:09.498423   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:09.614732   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:09.906923   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:09.921968   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:09.999730   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:10.116516   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:10.201128   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:10.407182   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:10.423028   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:10.498371   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:10.615188   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:10.906641   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:10.923616   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:10.999315   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:11.115207   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:11.406936   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:11.422599   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:11.498298   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:11.615327   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:11.906583   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:11.924157   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:11.999706   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:12.116145   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:12.406509   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:12.422912   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:12.499598   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:12.615691   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:12.699960   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:12.906995   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:12.921507   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:12.998976   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:13.115108   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:13.406731   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:13.422064   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:13.499510   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:13.620064   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:13.906601   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:13.924070   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:13.998623   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:14.115286   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:14.406840   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:14.421663   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:14.497967   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:14.615262   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:14.700387   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:14.907980   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:14.922533   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:14.998162   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:15.115498   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:15.406737   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:15.422973   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:15.498747   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:15.616252   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:15.908001   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:15.921775   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:15.999599   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:16.115019   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:16.408185   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:16.421775   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:16.498797   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:16.616449   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:16.906384   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:16.922179   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:16.998907   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:17.116204   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:17.200653   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:17.407571   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:17.422337   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:17.498408   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:17.615933   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:17.907112   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:17.922062   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:17.998928   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:18.116483   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:18.406280   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:18.421469   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:18.498796   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:18.620238   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:18.907377   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:18.921588   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:18.999110   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:19.115134   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:19.200742   18837 node_ready.go:58] node "addons-793365" has status "Ready":"False"
	I0108 20:11:19.406736   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:19.422262   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:19.499550   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:19.614856   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:19.907391   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:19.921645   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:19.999111   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:20.116509   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:20.199226   18837 node_ready.go:49] node "addons-793365" has status "Ready":"True"
	I0108 20:11:20.199254   18837 node_ready.go:38] duration metric: took 33.502640498s waiting for node "addons-793365" to be "Ready" ...
	I0108 20:11:20.199265   18837 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:11:20.207908   18837 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6bvw6" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:20.421964   18837 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 20:11:20.422005   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:20.489868   18837 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 20:11:20.489905   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:20.502095   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:20.615896   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:20.909015   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:20.926054   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:21.000028   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:21.115999   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:21.291729   18837 pod_ready.go:92] pod "coredns-5dd5756b68-6bvw6" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:21.291763   18837 pod_ready.go:81] duration metric: took 1.08381391s waiting for pod "coredns-5dd5756b68-6bvw6" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.291793   18837 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.302105   18837 pod_ready.go:92] pod "etcd-addons-793365" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:21.302149   18837 pod_ready.go:81] duration metric: took 10.345856ms waiting for pod "etcd-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.302178   18837 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.391464   18837 pod_ready.go:92] pod "kube-apiserver-addons-793365" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:21.391555   18837 pod_ready.go:81] duration metric: took 89.365944ms waiting for pod "kube-apiserver-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.391584   18837 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.403511   18837 pod_ready.go:92] pod "kube-controller-manager-addons-793365" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:21.403598   18837 pod_ready.go:81] duration metric: took 11.994892ms waiting for pod "kube-controller-manager-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.403626   18837 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrmpl" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.412979   18837 pod_ready.go:92] pod "kube-proxy-qrmpl" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:21.413072   18837 pod_ready.go:81] duration metric: took 9.426522ms waiting for pod "kube-proxy-qrmpl" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.413098   18837 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.413350   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:21.502073   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:21.507001   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:21.694021   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:21.803867   18837 pod_ready.go:92] pod "kube-scheduler-addons-793365" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:21.803961   18837 pod_ready.go:81] duration metric: took 390.843503ms waiting for pod "kube-scheduler-addons-793365" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.803983   18837 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:21.911111   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:22.001060   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:22.002898   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:22.198069   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:22.416649   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:22.491249   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:22.499784   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:22.617600   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:22.908863   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:22.924473   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:23.000156   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:23.116265   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:23.407055   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:23.423847   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:23.499414   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:23.615164   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:23.810958   18837 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:23.909388   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:23.925008   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:24.001348   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:24.115318   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:24.407882   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:24.423843   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:24.499529   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:24.616465   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:24.909245   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:24.924482   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:24.999615   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:25.117898   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:25.409177   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:25.423787   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:25.500762   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:25.616534   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:25.812408   18837 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:25.908165   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:25.924113   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:26.000116   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:26.115641   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:26.407856   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:26.424282   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:26.501612   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:26.615049   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:26.906615   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:26.923118   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:26.998441   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:27.115128   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:27.408291   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:27.423772   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:27.498981   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:27.616733   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:27.909026   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:27.923602   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:27.999468   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:28.116007   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:28.312369   18837 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:28.408709   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:28.423851   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:28.498502   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:28.615287   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:28.908317   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:28.924812   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:29.000528   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:29.115673   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:29.408862   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:29.424250   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:29.499500   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:29.616020   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:29.908756   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:29.924524   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:29.999463   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:30.115344   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:30.407340   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:30.423882   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:30.499504   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:30.615698   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:30.812084   18837 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:30.907763   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:30.925195   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:31.000461   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:31.117630   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:31.409133   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:31.424038   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:31.498960   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:31.616999   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:31.907837   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:31.925428   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:31.999783   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:32.116037   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:32.407996   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:32.423985   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:32.499645   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:32.616696   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:32.813134   18837 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:32.908786   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:32.923491   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:32.999006   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:33.115394   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:33.408088   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:33.423470   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:33.499694   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:33.616849   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:33.908807   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:33.923747   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:33.999533   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:34.116722   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:34.408372   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:34.422879   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:34.499571   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:34.614519   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:34.906775   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:34.923862   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:34.998545   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:35.116629   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:35.313033   18837 pod_ready.go:92] pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:35.313086   18837 pod_ready.go:81] duration metric: took 13.50909256s waiting for pod "metrics-server-7c66d45ddc-7n5fz" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:35.313109   18837 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:35.410670   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:35.492278   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:35.503559   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:35.616781   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:35.908874   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:35.924700   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:35.999562   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:36.116532   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:36.407184   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:36.423629   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:36.500647   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:36.616443   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:36.908754   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:36.924560   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:36.999974   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:37.115555   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:37.322509   18837 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:37.409135   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:37.426661   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:37.499935   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:37.616337   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:37.908947   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:37.922813   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:38.000646   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:38.114811   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:38.408509   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:38.423816   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:38.499654   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:38.614974   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:38.907507   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:38.923998   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:38.999529   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:39.115133   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:39.408848   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:39.423536   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:39.500341   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:39.616177   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:39.819509   18837 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:39.912475   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:39.929016   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:39.998823   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:40.115301   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:40.407845   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:40.424055   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:40.502476   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:40.616310   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:40.907873   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:40.924771   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:40.999417   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:41.115300   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:41.408758   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:41.424245   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:41.498795   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:41.616435   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:41.820660   18837 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:41.908247   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:41.923898   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:41.997993   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:42.115280   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:42.408232   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:42.423984   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:42.499075   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:42.615923   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:42.907912   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:42.925006   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:42.999013   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:43.117728   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:43.411268   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:43.495474   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:43.499299   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:43.696780   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:43.895421   18837 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:43.909873   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:43.996447   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:43.999865   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:44.117072   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:44.407387   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:44.423600   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:44.499628   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:44.617106   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:44.907437   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:44.925126   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:44.999953   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:45.116804   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:45.408678   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:45.490079   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:45.499471   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:45.616176   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:45.908305   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:45.925504   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:45.999546   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:46.115972   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:46.321032   18837 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:46.408034   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:46.425205   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:46.499027   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:46.616006   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:46.908340   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:46.922267   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:47.000148   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:47.115516   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:47.407339   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:47.424104   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:47.498698   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:47.616236   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:47.907273   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:47.923995   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:47.999749   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:48.115756   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:48.321930   18837 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:11:48.408586   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:48.425371   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:48.498397   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:48.615710   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:48.820123   18837 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:11:48.820156   18837 pod_ready.go:81] duration metric: took 13.507038022s waiting for pod "nvidia-device-plugin-daemonset-gc69v" in "kube-system" namespace to be "Ready" ...
	I0108 20:11:48.820186   18837 pod_ready.go:38] duration metric: took 28.620907624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:11:48.820210   18837 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:11:48.820264   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:11:48.820353   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:11:48.858437   18837 cri.go:89] found id: "ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda"
	I0108 20:11:48.858470   18837 cri.go:89] found id: ""
	I0108 20:11:48.858480   18837 logs.go:284] 1 containers: [ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda]
	I0108 20:11:48.858524   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:48.862212   18837 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:11:48.862293   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:11:48.907204   18837 cri.go:89] found id: "398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135"
	I0108 20:11:48.907237   18837 cri.go:89] found id: ""
	I0108 20:11:48.907249   18837 logs.go:284] 1 containers: [398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135]
	I0108 20:11:48.907331   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:48.907934   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:48.912228   18837 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:11:48.912311   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:11:48.924150   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:48.991499   18837 cri.go:89] found id: "634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121"
	I0108 20:11:48.991538   18837 cri.go:89] found id: ""
	I0108 20:11:48.991551   18837 logs.go:284] 1 containers: [634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121]
	I0108 20:11:48.991625   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:48.995907   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:11:48.995992   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:11:48.999530   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:49.038884   18837 cri.go:89] found id: "3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e"
	I0108 20:11:49.038913   18837 cri.go:89] found id: ""
	I0108 20:11:49.038925   18837 logs.go:284] 1 containers: [3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e]
	I0108 20:11:49.038989   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:49.042970   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:11:49.043052   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:11:49.117073   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:49.120873   18837 cri.go:89] found id: "d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54"
	I0108 20:11:49.120908   18837 cri.go:89] found id: ""
	I0108 20:11:49.120919   18837 logs.go:284] 1 containers: [d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54]
	I0108 20:11:49.120983   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:49.124684   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:11:49.124756   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:11:49.207208   18837 cri.go:89] found id: "6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae"
	I0108 20:11:49.207233   18837 cri.go:89] found id: ""
	I0108 20:11:49.207241   18837 logs.go:284] 1 containers: [6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae]
	I0108 20:11:49.207307   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:49.211465   18837 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:11:49.211530   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:11:49.291559   18837 cri.go:89] found id: "721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b"
	I0108 20:11:49.291592   18837 cri.go:89] found id: ""
	I0108 20:11:49.291606   18837 logs.go:284] 1 containers: [721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b]
	I0108 20:11:49.291666   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:49.295297   18837 logs.go:123] Gathering logs for dmesg ...
	I0108 20:11:49.295318   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:11:49.308680   18837 logs.go:123] Gathering logs for kube-apiserver [ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda] ...
	I0108 20:11:49.308713   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda"
	I0108 20:11:49.363458   18837 logs.go:123] Gathering logs for coredns [634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121] ...
	I0108 20:11:49.363492   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121"
	I0108 20:11:49.408972   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:49.424859   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:49.433203   18837 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:11:49.433240   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:11:49.499036   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:49.550867   18837 logs.go:123] Gathering logs for kubelet ...
	I0108 20:11:49.550923   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 20:11:49.616569   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:49.644718   18837 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:11:49.644776   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:11:49.841948   18837 logs.go:123] Gathering logs for etcd [398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135] ...
	I0108 20:11:49.841977   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135"
	I0108 20:11:49.909390   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:49.925501   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:49.958027   18837 logs.go:123] Gathering logs for kube-scheduler [3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e] ...
	I0108 20:11:49.958066   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e"
	I0108 20:11:50.001356   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:50.038108   18837 logs.go:123] Gathering logs for kube-proxy [d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54] ...
	I0108 20:11:50.038146   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54"
	I0108 20:11:50.116065   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:50.118480   18837 logs.go:123] Gathering logs for kube-controller-manager [6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae] ...
	I0108 20:11:50.118525   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae"
	I0108 20:11:50.183043   18837 logs.go:123] Gathering logs for kindnet [721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b] ...
	I0108 20:11:50.183091   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b"
	I0108 20:11:50.226617   18837 logs.go:123] Gathering logs for container status ...
	I0108 20:11:50.226644   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:11:50.407850   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:50.430108   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:50.498563   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:50.618220   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:50.909687   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:50.923193   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:50.999052   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:51.116093   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:51.407414   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:51.423940   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:51.498408   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:51.615653   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:51.909570   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:51.924429   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:51.998919   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:52.116877   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:52.407711   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:52.424586   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:52.499972   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:52.616960   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:52.801914   18837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:11:52.816950   18837 api_server.go:72] duration metric: took 1m8.40467207s to wait for apiserver process to appear ...
	I0108 20:11:52.816981   18837 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:11:52.817022   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:11:52.817079   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:11:52.862072   18837 cri.go:89] found id: "ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda"
	I0108 20:11:52.862118   18837 cri.go:89] found id: ""
	I0108 20:11:52.862133   18837 logs.go:284] 1 containers: [ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda]
	I0108 20:11:52.862213   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:52.892481   18837 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:11:52.892584   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:11:52.908689   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:52.923703   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:52.940560   18837 cri.go:89] found id: "398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135"
	I0108 20:11:52.940588   18837 cri.go:89] found id: ""
	I0108 20:11:52.940597   18837 logs.go:284] 1 containers: [398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135]
	I0108 20:11:52.940647   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:52.989280   18837 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:11:52.989335   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:11:52.999758   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:53.027430   18837 cri.go:89] found id: "634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121"
	I0108 20:11:53.027458   18837 cri.go:89] found id: ""
	I0108 20:11:53.027468   18837 logs.go:284] 1 containers: [634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121]
	I0108 20:11:53.027528   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:53.032022   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:11:53.032108   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:11:53.106326   18837 cri.go:89] found id: "3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e"
	I0108 20:11:53.106355   18837 cri.go:89] found id: ""
	I0108 20:11:53.106373   18837 logs.go:284] 1 containers: [3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e]
	I0108 20:11:53.106429   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:53.109678   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:11:53.109739   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:11:53.116010   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:53.195921   18837 cri.go:89] found id: "d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54"
	I0108 20:11:53.195950   18837 cri.go:89] found id: ""
	I0108 20:11:53.195962   18837 logs.go:284] 1 containers: [d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54]
	I0108 20:11:53.196037   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:53.200410   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:11:53.200536   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:11:53.243420   18837 cri.go:89] found id: "6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae"
	I0108 20:11:53.243451   18837 cri.go:89] found id: ""
	I0108 20:11:53.243461   18837 logs.go:284] 1 containers: [6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae]
	I0108 20:11:53.243518   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:53.247769   18837 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:11:53.247870   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:11:53.328656   18837 cri.go:89] found id: "721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b"
	I0108 20:11:53.328690   18837 cri.go:89] found id: ""
	I0108 20:11:53.328701   18837 logs.go:284] 1 containers: [721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b]
	I0108 20:11:53.328782   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:53.333195   18837 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:11:53.333224   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:11:53.411964   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:53.424029   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:53.498165   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:53.612172   18837 logs.go:123] Gathering logs for container status ...
	I0108 20:11:53.612220   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:11:53.617042   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:53.724642   18837 logs.go:123] Gathering logs for coredns [634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121] ...
	I0108 20:11:53.724686   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121"
	I0108 20:11:53.763253   18837 logs.go:123] Gathering logs for kube-scheduler [3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e] ...
	I0108 20:11:53.763289   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e"
	I0108 20:11:53.829282   18837 logs.go:123] Gathering logs for kube-proxy [d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54] ...
	I0108 20:11:53.829332   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54"
	I0108 20:11:53.893128   18837 logs.go:123] Gathering logs for kube-controller-manager [6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae] ...
	I0108 20:11:53.893169   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae"
	I0108 20:11:53.908610   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:53.924202   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:53.965592   18837 logs.go:123] Gathering logs for kubelet ...
	I0108 20:11:53.965647   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 20:11:54.000683   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:54.069198   18837 logs.go:123] Gathering logs for dmesg ...
	I0108 20:11:54.069239   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:11:54.102268   18837 logs.go:123] Gathering logs for kube-apiserver [ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda] ...
	I0108 20:11:54.102300   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda"
	I0108 20:11:54.117931   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:54.161028   18837 logs.go:123] Gathering logs for etcd [398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135] ...
	I0108 20:11:54.161095   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135"
	I0108 20:11:54.254372   18837 logs.go:123] Gathering logs for kindnet [721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b] ...
	I0108 20:11:54.254411   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b"
	I0108 20:11:54.304653   18837 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:11:54.304691   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:11:54.407734   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:11:54.423541   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:54.499241   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:54.615471   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:54.907588   18837 kapi.go:107] duration metric: took 1m4.005337302s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 20:11:54.923255   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:54.999340   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:55.115017   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:55.424072   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:55.498575   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:55.616927   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:55.924053   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:55.998977   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:56.117180   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:56.423727   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:56.501832   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:56.616203   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:56.882710   18837 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 20:11:56.887904   18837 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 20:11:56.889227   18837 api_server.go:141] control plane version: v1.28.4
	I0108 20:11:56.889254   18837 api_server.go:131] duration metric: took 4.072266664s to wait for apiserver health ...
	I0108 20:11:56.889263   18837 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:11:56.889284   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:11:56.889336   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:11:56.924107   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:56.933789   18837 cri.go:89] found id: "ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda"
	I0108 20:11:56.933825   18837 cri.go:89] found id: ""
	I0108 20:11:56.933838   18837 logs.go:284] 1 containers: [ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda]
	I0108 20:11:56.933935   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:56.938449   18837 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:11:56.938549   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:11:56.999938   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:57.013722   18837 cri.go:89] found id: "398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135"
	I0108 20:11:57.013758   18837 cri.go:89] found id: ""
	I0108 20:11:57.013769   18837 logs.go:284] 1 containers: [398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135]
	I0108 20:11:57.013850   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:57.018308   18837 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:11:57.018440   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:11:57.095819   18837 cri.go:89] found id: "634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121"
	I0108 20:11:57.095855   18837 cri.go:89] found id: ""
	I0108 20:11:57.095868   18837 logs.go:284] 1 containers: [634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121]
	I0108 20:11:57.095944   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:57.100304   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:11:57.100364   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:11:57.116549   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:57.148447   18837 cri.go:89] found id: "3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e"
	I0108 20:11:57.148487   18837 cri.go:89] found id: ""
	I0108 20:11:57.148502   18837 logs.go:284] 1 containers: [3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e]
	I0108 20:11:57.148566   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:57.152588   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:11:57.152686   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:11:57.226539   18837 cri.go:89] found id: "d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54"
	I0108 20:11:57.226572   18837 cri.go:89] found id: ""
	I0108 20:11:57.226581   18837 logs.go:284] 1 containers: [d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54]
	I0108 20:11:57.226652   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:57.231265   18837 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:11:57.231347   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:11:57.318754   18837 cri.go:89] found id: "6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae"
	I0108 20:11:57.318778   18837 cri.go:89] found id: ""
	I0108 20:11:57.318788   18837 logs.go:284] 1 containers: [6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae]
	I0108 20:11:57.318854   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:57.323442   18837 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:11:57.323503   18837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:11:57.411059   18837 cri.go:89] found id: "721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b"
	I0108 20:11:57.411092   18837 cri.go:89] found id: ""
	I0108 20:11:57.411104   18837 logs.go:284] 1 containers: [721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b]
	I0108 20:11:57.411175   18837 ssh_runner.go:195] Run: which crictl
	I0108 20:11:57.415279   18837 logs.go:123] Gathering logs for etcd [398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135] ...
	I0108 20:11:57.415324   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135"
	I0108 20:11:57.428138   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:57.500898   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:57.547807   18837 logs.go:123] Gathering logs for kube-proxy [d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54] ...
	I0108 20:11:57.547866   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54"
	I0108 20:11:57.614516   18837 logs.go:123] Gathering logs for kindnet [721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b] ...
	I0108 20:11:57.614548   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b"
	I0108 20:11:57.615917   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:57.703039   18837 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:11:57.703074   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:11:57.791434   18837 logs.go:123] Gathering logs for container status ...
	I0108 20:11:57.791496   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:11:57.902456   18837 logs.go:123] Gathering logs for kube-apiserver [ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda] ...
	I0108 20:11:57.902491   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda"
	I0108 20:11:57.925611   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:57.999689   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:58.021530   18837 logs.go:123] Gathering logs for dmesg ...
	I0108 20:11:58.021587   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:11:58.035938   18837 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:11:58.035974   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:11:58.115665   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:58.146234   18837 logs.go:123] Gathering logs for coredns [634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121] ...
	I0108 20:11:58.146270   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121"
	I0108 20:11:58.186503   18837 logs.go:123] Gathering logs for kube-scheduler [3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e] ...
	I0108 20:11:58.186559   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e"
	I0108 20:11:58.234033   18837 logs.go:123] Gathering logs for kube-controller-manager [6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae] ...
	I0108 20:11:58.234095   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae"
	I0108 20:11:58.302977   18837 logs.go:123] Gathering logs for kubelet ...
	I0108 20:11:58.303025   18837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 20:11:58.492665   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:58.501064   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:58.616584   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:58.925859   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:58.999219   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:59.117018   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:59.424106   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:59.499777   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:11:59.616466   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:11:59.924805   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:11:59.999700   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:00.116760   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:00.424948   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:00.499598   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:00.617162   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:00.924554   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:00.979890   18837 system_pods.go:59] 19 kube-system pods found
	I0108 20:12:00.979937   18837 system_pods.go:61] "coredns-5dd5756b68-6bvw6" [0ff70b46-18cd-4b30-96bf-23fea08dde9d] Running
	I0108 20:12:00.979944   18837 system_pods.go:61] "csi-hostpath-attacher-0" [b84e6467-90c0-4d7e-be61-3bee5d204618] Running
	I0108 20:12:00.979950   18837 system_pods.go:61] "csi-hostpath-resizer-0" [cf87d11f-b7a2-4757-ad16-528e4aa87a4f] Running
	I0108 20:12:00.979960   18837 system_pods.go:61] "csi-hostpathplugin-t5qwl" [925ca09d-92c3-4f24-b565-ea6d5916736b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 20:12:00.979966   18837 system_pods.go:61] "etcd-addons-793365" [8b47e957-13e8-44d9-a60d-22e4869681de] Running
	I0108 20:12:00.979973   18837 system_pods.go:61] "kindnet-4bvxl" [803d637e-c15e-4c2f-a41f-1def1563e46d] Running
	I0108 20:12:00.979978   18837 system_pods.go:61] "kube-apiserver-addons-793365" [194b6374-d57b-43b4-a81d-8545866b9c3a] Running
	I0108 20:12:00.979983   18837 system_pods.go:61] "kube-controller-manager-addons-793365" [b2700e50-e81f-48a6-9bda-81a2f0ad6755] Running
	I0108 20:12:00.979989   18837 system_pods.go:61] "kube-ingress-dns-minikube" [d08b09ea-8398-44e8-b330-9bb9f48906b0] Running
	I0108 20:12:00.979998   18837 system_pods.go:61] "kube-proxy-qrmpl" [d1d9a195-c6a6-4cf6-b30b-bc3fe24e1dbc] Running
	I0108 20:12:00.980003   18837 system_pods.go:61] "kube-scheduler-addons-793365" [2fcdecb6-80c8-4c95-81ce-d1f915da3986] Running
	I0108 20:12:00.980010   18837 system_pods.go:61] "metrics-server-7c66d45ddc-7n5fz" [b8ce805d-d383-4dbe-a9c3-6e15a815be3d] Running
	I0108 20:12:00.980016   18837 system_pods.go:61] "nvidia-device-plugin-daemonset-gc69v" [abbcf562-fe67-4f2e-a81c-13b335b4c501] Running
	I0108 20:12:00.980026   18837 system_pods.go:61] "registry-proxy-pr7pt" [0eff1f23-2361-4088-8ccf-5c2e1e86e104] Running
	I0108 20:12:00.980036   18837 system_pods.go:61] "registry-xpwjx" [44a1b58b-e22d-43ec-aac2-050524a7e3e5] Running
	I0108 20:12:00.980049   18837 system_pods.go:61] "snapshot-controller-58dbcc7b99-9xr4w" [79ad6761-5d8f-4a5e-85e7-d900a506f249] Running
	I0108 20:12:00.980059   18837 system_pods.go:61] "snapshot-controller-58dbcc7b99-c8nhd" [f52cccf2-fb8c-4ac6-a8dd-7bb3cac0639d] Running
	I0108 20:12:00.980069   18837 system_pods.go:61] "storage-provisioner" [6b800727-bc2f-4b78-9114-541a39b476e4] Running
	I0108 20:12:00.980076   18837 system_pods.go:61] "tiller-deploy-7b677967b9-27plq" [6100538b-a718-4be8-9ca1-bb2d3bfa9ce5] Running
	I0108 20:12:00.980090   18837 system_pods.go:74] duration metric: took 4.090819576s to wait for pod list to return data ...
	I0108 20:12:00.980106   18837 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:12:00.983464   18837 default_sa.go:45] found service account: "default"
	I0108 20:12:00.983503   18837 default_sa.go:55] duration metric: took 3.38622ms for default service account to be created ...
	I0108 20:12:00.983513   18837 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:12:00.998495   18837 system_pods.go:86] 19 kube-system pods found
	I0108 20:12:00.998531   18837 system_pods.go:89] "coredns-5dd5756b68-6bvw6" [0ff70b46-18cd-4b30-96bf-23fea08dde9d] Running
	I0108 20:12:00.998539   18837 system_pods.go:89] "csi-hostpath-attacher-0" [b84e6467-90c0-4d7e-be61-3bee5d204618] Running
	I0108 20:12:00.998545   18837 system_pods.go:89] "csi-hostpath-resizer-0" [cf87d11f-b7a2-4757-ad16-528e4aa87a4f] Running
	I0108 20:12:00.998557   18837 system_pods.go:89] "csi-hostpathplugin-t5qwl" [925ca09d-92c3-4f24-b565-ea6d5916736b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 20:12:00.998564   18837 system_pods.go:89] "etcd-addons-793365" [8b47e957-13e8-44d9-a60d-22e4869681de] Running
	I0108 20:12:00.998573   18837 system_pods.go:89] "kindnet-4bvxl" [803d637e-c15e-4c2f-a41f-1def1563e46d] Running
	I0108 20:12:00.998581   18837 system_pods.go:89] "kube-apiserver-addons-793365" [194b6374-d57b-43b4-a81d-8545866b9c3a] Running
	I0108 20:12:00.998590   18837 system_pods.go:89] "kube-controller-manager-addons-793365" [b2700e50-e81f-48a6-9bda-81a2f0ad6755] Running
	I0108 20:12:00.998606   18837 system_pods.go:89] "kube-ingress-dns-minikube" [d08b09ea-8398-44e8-b330-9bb9f48906b0] Running
	I0108 20:12:00.998616   18837 system_pods.go:89] "kube-proxy-qrmpl" [d1d9a195-c6a6-4cf6-b30b-bc3fe24e1dbc] Running
	I0108 20:12:00.998626   18837 system_pods.go:89] "kube-scheduler-addons-793365" [2fcdecb6-80c8-4c95-81ce-d1f915da3986] Running
	I0108 20:12:00.998640   18837 system_pods.go:89] "metrics-server-7c66d45ddc-7n5fz" [b8ce805d-d383-4dbe-a9c3-6e15a815be3d] Running
	I0108 20:12:00.998651   18837 system_pods.go:89] "nvidia-device-plugin-daemonset-gc69v" [abbcf562-fe67-4f2e-a81c-13b335b4c501] Running
	I0108 20:12:00.998661   18837 system_pods.go:89] "registry-proxy-pr7pt" [0eff1f23-2361-4088-8ccf-5c2e1e86e104] Running
	I0108 20:12:00.998668   18837 system_pods.go:89] "registry-xpwjx" [44a1b58b-e22d-43ec-aac2-050524a7e3e5] Running
	I0108 20:12:00.998678   18837 system_pods.go:89] "snapshot-controller-58dbcc7b99-9xr4w" [79ad6761-5d8f-4a5e-85e7-d900a506f249] Running
	I0108 20:12:00.998688   18837 system_pods.go:89] "snapshot-controller-58dbcc7b99-c8nhd" [f52cccf2-fb8c-4ac6-a8dd-7bb3cac0639d] Running
	I0108 20:12:00.998697   18837 system_pods.go:89] "storage-provisioner" [6b800727-bc2f-4b78-9114-541a39b476e4] Running
	I0108 20:12:00.998708   18837 system_pods.go:89] "tiller-deploy-7b677967b9-27plq" [6100538b-a718-4be8-9ca1-bb2d3bfa9ce5] Running
	I0108 20:12:00.998719   18837 system_pods.go:126] duration metric: took 15.199865ms to wait for k8s-apps to be running ...
	I0108 20:12:00.998731   18837 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:12:00.998782   18837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:12:00.998905   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:01.010844   18837 system_svc.go:56] duration metric: took 12.099419ms WaitForService to wait for kubelet.
	I0108 20:12:01.010883   18837 kubeadm.go:581] duration metric: took 1m16.598611297s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:12:01.010912   18837 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:12:01.014605   18837 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 20:12:01.014641   18837 node_conditions.go:123] node cpu capacity is 8
	I0108 20:12:01.014656   18837 node_conditions.go:105] duration metric: took 3.737982ms to run NodePressure ...
	I0108 20:12:01.014673   18837 start.go:228] waiting for startup goroutines ...
	I0108 20:12:01.116239   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:01.422738   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:01.499387   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:01.615292   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:01.924378   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:01.998847   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:02.116480   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:02.425443   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:02.499842   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:02.616979   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:02.923909   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:02.999891   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:03.116709   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:03.424281   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:03.499071   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:03.615925   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:03.925815   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:03.999182   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:04.115817   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:04.423423   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:04.498940   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:04.615854   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:04.924324   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:04.998901   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:05.116320   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:05.494547   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:05.501832   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:12:05.692165   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:05.994487   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:06.000804   18837 kapi.go:107] duration metric: took 1m11.50598717s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 20:12:06.003507   18837 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-793365 cluster.
	I0108 20:12:06.005098   18837 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 20:12:06.007035   18837 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 20:12:06.115516   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:06.493243   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:06.689663   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:06.923728   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:07.118407   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:07.423765   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:07.616262   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:07.922905   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:08.115788   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:08.424126   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:08.616507   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:08.924776   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:09.116141   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:09.424952   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:09.616894   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:09.925105   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:10.117183   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:10.422864   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:10.616341   18837 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:12:10.924252   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:11.190308   18837 kapi.go:107] duration metric: took 1m19.081595895s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 20:12:11.424957   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:11.924115   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:12.423780   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:12.922433   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:13.423144   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:13.924754   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:14.424065   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:14.923116   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:15.423594   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:15.923326   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:16.423661   18837 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:12:16.924623   18837 kapi.go:107] duration metric: took 1m24.007261154s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 20:12:16.926917   18837 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0108 20:12:16.928448   18837 addons.go:508] enable addons completed in 1m33.040676549s: enabled=[cloud-spanner nvidia-device-plugin ingress-dns helm-tiller storage-provisioner default-storageclass storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0108 20:12:16.928498   18837 start.go:233] waiting for cluster config update ...
	I0108 20:12:16.928521   18837 start.go:242] writing updated cluster config ...
	I0108 20:12:16.928864   18837 ssh_runner.go:195] Run: rm -f paused
	I0108 20:12:16.981832   18837 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:12:16.984488   18837 out.go:177] * Done! kubectl is now configured to use "addons-793365" cluster and "default" namespace by default
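The start log above ends once each kapi.go:96 wait loop resolves and the healthz probe against https://192.168.49.2:8443/healthz comes back "200: ok" (api_server.go:253/279). The polling pattern the log records — hit an HTTPS /healthz endpoint until it answers "ok" or a deadline passes — can be sketched in a few lines of Go. This is an illustrative stand-in, not minikube's actual api_server.go; the URL, 500ms poll interval, and one-minute deadline are assumed values, and TLS verification is skipped the way a throwaway test probe might allow.

// healthzpoll.go: poll an apiserver /healthz endpoint until it reports "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Matches the `returned 200: ok` lines in the log above.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("healthz at %s not ok within %s", url, deadline)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against the cluster above, a loop like this would have returned in roughly the 4.07s that the api_server.go:131 duration metric reports.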
	
	
	==> CRI-O <==
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.296343219Z" level=info msg="Removing container: 98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671" id=e93dc0fb-5c79-4d74-90cc-94ab9d1a7e13 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.323299202Z" level=info msg="Removed container 98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=e93dc0fb-5c79-4d74-90cc-94ab9d1a7e13 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.362554646Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=cd7fb22e-4723-4da1-84e2-0b127656c716 name=/runtime.v1.ImageService/PullImage
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.363400041Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=ca00a924-2fb3-4380-bbe6-446e9d04d73b name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.364784528Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=ca00a924-2fb3-4380-bbe6-446e9d04d73b name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.365934210Z" level=info msg="Creating container: default/hello-world-app-5d77478584-wb955/hello-world-app" id=c93b81e5-8aca-4341-b02a-1bc481bba474 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.366048244Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.428351359Z" level=info msg="Created container 6027354c74eb39efa4c3d897a83c583b38b545a910a96e4d5dc103410642e9b6: default/hello-world-app-5d77478584-wb955/hello-world-app" id=c93b81e5-8aca-4341-b02a-1bc481bba474 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.429224049Z" level=info msg="Starting container: 6027354c74eb39efa4c3d897a83c583b38b545a910a96e4d5dc103410642e9b6" id=2c22a8ea-6dbc-49ad-9a43-8377f52e36ca name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:14:51 addons-793365 crio[948]: time="2024-01-08 20:14:51.437493623Z" level=info msg="Started container" PID=10471 containerID=6027354c74eb39efa4c3d897a83c583b38b545a910a96e4d5dc103410642e9b6 description=default/hello-world-app-5d77478584-wb955/hello-world-app id=2c22a8ea-6dbc-49ad-9a43-8377f52e36ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=92d750e7c261717ffc8029217bbbf63a5a9c1bf4392faf117ff17824406e7367
	Jan 08 20:14:52 addons-793365 crio[948]: time="2024-01-08 20:14:52.927817781Z" level=info msg="Stopping container: c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61 (timeout: 2s)" id=e536a241-df77-436f-a6eb-169657c87923 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 20:14:54 addons-793365 crio[948]: time="2024-01-08 20:14:54.934463211Z" level=warning msg="Stopping container c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=e536a241-df77-436f-a6eb-169657c87923 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 20:14:54 addons-793365 conmon[6489]: conmon c78c3bdd852abda3f5d3 <ninfo>: container 6501 exited with status 137
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.070285302Z" level=info msg="Stopped container c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61: ingress-nginx/ingress-nginx-controller-69cff4fd79-ffm56/controller" id=e536a241-df77-436f-a6eb-169657c87923 name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.070894448Z" level=info msg="Stopping pod sandbox: e99ac38965c69c4f4444eccd5c1a046c49a4002061046af9adfddbdf9e077a32" id=36ffbd5f-6e47-49b7-bdf7-2fdb53af4f78 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.074361377Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-W2OKBUUBKZF3T23E - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-COBTCF3IOXLMQWBW - [0:0]\n-X KUBE-HP-W2OKBUUBKZF3T23E\n-X KUBE-HP-COBTCF3IOXLMQWBW\nCOMMIT\n"
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.076179984Z" level=info msg="Closing host port tcp:80"
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.076241413Z" level=info msg="Closing host port tcp:443"
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.077777903Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.077801423Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.077968317Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-ffm56 Namespace:ingress-nginx ID:e99ac38965c69c4f4444eccd5c1a046c49a4002061046af9adfddbdf9e077a32 UID:6061692f-a8fe-4e87-9eec-fe9e3cccb35c NetNS:/var/run/netns/9d26b755-4a39-4cec-b9f6-77a1ef408a75 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.078100202Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-ffm56 from CNI network \"kindnet\" (type=ptp)"
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.120919374Z" level=info msg="Stopped pod sandbox: e99ac38965c69c4f4444eccd5c1a046c49a4002061046af9adfddbdf9e077a32" id=36ffbd5f-6e47-49b7-bdf7-2fdb53af4f78 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.312660440Z" level=info msg="Removing container: c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61" id=7c6455e6-8fed-4917-8e9d-6e0122a5e1bb name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 20:14:55 addons-793365 crio[948]: time="2024-01-08 20:14:55.328188695Z" level=info msg="Removed container c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61: ingress-nginx/ingress-nginx-controller-69cff4fd79-ffm56/controller" id=7c6455e6-8fed-4917-8e9d-6e0122a5e1bb name=/runtime.v1.RuntimeService/RemoveContainer
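The CRI-O excerpt shows the ingress addon being torn down: StopContainer is issued with a 2s timeout, the controller does not exit within it, and conmon reports exit status 137 (128 + SIGKILL) once the process is killed; the sandbox then releases host ports 80/443 and is detached from the kindnet CNI network. A minimal Go sketch of that stop-then-kill sequence follows, purely illustrative: a local shell process that ignores SIGTERM stands in for the container, and the 2s grace period mirrors the timeout CRI-O logged here (Kubernetes' usual default is 30s).

// stopthenkill.go: send SIGTERM, wait up to a grace period, then SIGKILL.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop signal
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		// Force-kill; the shell-visible exit status for SIGKILL is
		// 128 + 9 = 137, the value conmon logs above.
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	// Stand-in for a container process that ignores SIGTERM.
	cmd := exec.Command("sh", "-c", "trap '' TERM; sleep 60")
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	err := stopWithGrace(cmd, 2*time.Second)
	fmt.Println("process ended:", err) // prints "signal: killed" on the kill path
}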
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6027354c74eb3       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago        Running             hello-world-app           0                   92d750e7c2617       hello-world-app-5d77478584-wb955
	53bf82af1208b       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        About a minute ago   Running             headlamp                  0                   4a49943d84c4d       headlamp-7ddfbb94ff-s294f
	25d540a0cb973       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago        Running             nginx                     0                   c8e0d4f987122       nginx
	112fb9e097f62       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago        Running             gcp-auth                  0                   488ba72485c1f       gcp-auth-d4c87556c-dl68m
	d923f35b88cf7       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago        Exited              patch                     2                   8beab53b7db23       ingress-nginx-admission-patch-ztqmz
	27c0eb51dc80f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago        Exited              create                    0                   d46fcedd7b220       ingress-nginx-admission-create-tq4dc
	fb49728f2a5cf       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago        Running             yakd                      0                   7c3f07ee9cd42       yakd-dashboard-9947fc6bf-p59sb
	f5fdb8fed8278       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago        Running             storage-provisioner       0                   ae7c9609c56d2       storage-provisioner
	634fe4b1d0efd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago        Running             coredns                   0                   bcf7975891db8       coredns-5dd5756b68-6bvw6
	d0f7f49c20d28       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago        Running             kube-proxy                0                   5c62ea63f98f5       kube-proxy-qrmpl
	721e913055e08       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago        Running             kindnet-cni               0                   75ba0245d07fd       kindnet-4bvxl
	3bdace17e1725       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago        Running             kube-scheduler            0                   61e866cadacbd       kube-scheduler-addons-793365
	398cbc4f28a7a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago        Running             etcd                      0                   b6673997154cc       etcd-addons-793365
	6443d45663b4c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago        Running             kube-controller-manager   0                   1392d72c1d4a3       kube-controller-manager-addons-793365
	ec40dbb2e1d20       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago        Running             kube-apiserver            0                   eab1e2fd4c565       kube-apiserver-addons-793365
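The container-status table above is the CRI runtime's view of the node. On this crio-based node the same listing can typically be reproduced over SSH with crictl; a minimal sketch, assuming crictl is available on the node's PATH:

	out/minikube-linux-amd64 -p addons-793365 ssh "sudo crictl ps -a"                  # list running and exited containers
	out/minikube-linux-amd64 -p addons-793365 ssh "sudo crictl inspect 6027354c74eb3"  # detail for one container by (abbreviated) ID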
	
	
	==> coredns [634fe4b1d0efd8e41ef6a4052c67f46af5c719c5012ebd150db5afd83efed121] <==
	[INFO] 10.244.0.18:59032 - 35918 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000131367s
	[INFO] 10.244.0.18:34177 - 54904 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004865671s
	[INFO] 10.244.0.18:34177 - 57975 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00542574s
	[INFO] 10.244.0.18:55964 - 1556 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005157229s
	[INFO] 10.244.0.18:55964 - 46351 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005199026s
	[INFO] 10.244.0.18:52578 - 57131 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004378332s
	[INFO] 10.244.0.18:52578 - 32302 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004488384s
	[INFO] 10.244.0.18:47194 - 50369 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00008283s
	[INFO] 10.244.0.18:47194 - 54727 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125022s
	[INFO] 10.244.0.20:40799 - 59480 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227704s
	[INFO] 10.244.0.20:40041 - 42734 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373727s
	[INFO] 10.244.0.20:36235 - 61736 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151608s
	[INFO] 10.244.0.20:35447 - 31924 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000226208s
	[INFO] 10.244.0.20:57812 - 12470 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014647s
	[INFO] 10.244.0.20:39292 - 51579 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000962s
	[INFO] 10.244.0.20:39302 - 28301 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007286639s
	[INFO] 10.244.0.20:48700 - 20640 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007190759s
	[INFO] 10.244.0.20:54200 - 18962 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006984937s
	[INFO] 10.244.0.20:55156 - 57665 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007281393s
	[INFO] 10.244.0.20:39217 - 16898 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008223008s
	[INFO] 10.244.0.20:37053 - 39322 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008280826s
	[INFO] 10.244.0.20:36045 - 25896 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000838463s
	[INFO] 10.244.0.20:52593 - 45168 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00088796s
	[INFO] 10.244.0.25:47145 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000275068s
	[INFO] 10.244.0.25:58458 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000180234s
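The NXDOMAIN bursts above are expected behavior, not failures: with the cluster's ndots:5 policy, a short name such as storage.googleapis.com is first tried against every DNS search suffix (all of them visible in the queries above) before being resolved as an absolute name, and only the final lookup returns NOERROR. A pod's /etc/resolv.conf typically looks like the sketch below; the nameserver shown is the conventional kube-dns ClusterIP and the first suffix tracks the pod's namespace, so both may differ:

	nameserver 10.96.0.10
	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	options ndots:5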
	
	
	==> describe nodes <==
	Name:               addons-793365
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-793365
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=addons-793365
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_10_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-793365
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:10:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-793365
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:14:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:13:35 +0000   Mon, 08 Jan 2024 20:10:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:13:35 +0000   Mon, 08 Jan 2024 20:10:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:13:35 +0000   Mon, 08 Jan 2024 20:10:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:13:35 +0000   Mon, 08 Jan 2024 20:11:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-793365
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 4419d83de88245fda3e2d6fc6e89212b
	  System UUID:                e6183a13-465a-41ef-928d-ccc64d0f00f9
	  Boot ID:                    0e88edaa-666a-4348-8c8d-059e8a9aec1e
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-wb955         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-dl68m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  headlamp                    headlamp-7ddfbb94ff-s294f                0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 coredns-5dd5756b68-6bvw6                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m15s
	  kube-system                 etcd-addons-793365                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m28s
	  kube-system                 kindnet-4bvxl                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m16s
	  kube-system                 kube-apiserver-addons-793365             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-793365    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-qrmpl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-addons-793365             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-p59sb           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m10s  kube-proxy       
	  Normal  Starting                 4m29s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m29s  kubelet          Node addons-793365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s  kubelet          Node addons-793365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s  kubelet          Node addons-793365 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m17s  node-controller  Node addons-793365 event: Registered Node addons-793365 in Controller
	  Normal  NodeReady                3m41s  kubelet          Node addons-793365 status is now: NodeReady
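This node dump is the standard kubectl view and can be regenerated at any point during triage with:

	kubectl --context addons-793365 describe node addons-793365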
	
	
	==> dmesg <==
	[  +0.016364] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.011290] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001498] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001400] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.002127] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.002521] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001651] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001519] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001418] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001549] platform eisa.0: Cannot allocate resource for EISA slot 8
	[Jan 8 19:18] kauditd_printk_skb: 36 callbacks suppressed
	[Jan 8 20:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 9e e3 b2 ab 8c 33 16 54 ac 77 f8 4c 08 00
	[  +1.005821] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 9e e3 b2 ab 8c 33 16 54 ac 77 f8 4c 08 00
	[  +2.019800] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 9e e3 b2 ab 8c 33 16 54 ac 77 f8 4c 08 00
	[  +4.027597] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 9e e3 b2 ab 8c 33 16 54 ac 77 f8 4c 08 00
	[  +8.191089] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 9e e3 b2 ab 8c 33 16 54 ac 77 f8 4c 08 00
	[Jan 8 20:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 9e e3 b2 ab 8c 33 16 54 ac 77 f8 4c 08 00
	[ +33.532590] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000039] ll header: 00000000: 9e e3 b2 ab 8c 33 16 54 ac 77 f8 4c 08 00
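The repeated "martian source 10.244.0.21 from 127.0.0.1" entries mean the kernel saw packets with a loopback source address arrive on eth0; the Jan 8 20:12 timestamps line up with the test's curl against 127.0.0.1, so they are plausibly the hairpinned loopback NodePort traffic (see the route_localnet note under kube-proxy below). Whether martian logging is enabled can be checked on the node with:

	out/minikube-linux-amd64 -p addons-793365 ssh "sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians"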
	
	
	==> etcd [398cbc4f28a7aa4c384b22353b71667c62316143a912fef1ef6b57e8bf5aa135] <==
	{"level":"warn","ts":"2024-01-08T20:10:49.704444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.68475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:10:49.705037Z","caller":"traceutil/trace.go:171","msg":"trace[62979520] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:423; }","duration":"102.284386ms","start":"2024-01-08T20:10:49.60274Z","end":"2024-01-08T20:10:49.705024Z","steps":["trace[62979520] 'agreement among raft nodes before linearized reading'  (duration: 101.664188ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:10:49.704487Z","caller":"traceutil/trace.go:171","msg":"trace[273938651] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"101.382127ms","start":"2024-01-08T20:10:49.603095Z","end":"2024-01-08T20:10:49.704478Z","steps":["trace[273938651] 'process raft request'  (duration: 99.726888ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:10:49.704515Z","caller":"traceutil/trace.go:171","msg":"trace[1738637980] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"101.380122ms","start":"2024-01-08T20:10:49.603128Z","end":"2024-01-08T20:10:49.704508Z","steps":["trace[1738637980] 'process raft request'  (duration: 99.731755ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:10:49.704568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.802932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:10:49.705385Z","caller":"traceutil/trace.go:171","msg":"trace[1046145283] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:423; }","duration":"102.60797ms","start":"2024-01-08T20:10:49.602755Z","end":"2024-01-08T20:10:49.705363Z","steps":["trace[1046145283] 'agreement among raft nodes before linearized reading'  (duration: 101.789205ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:10:49.704671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.850954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/tiller-clusterrolebinding\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:10:49.705574Z","caller":"traceutil/trace.go:171","msg":"trace[1160723819] range","detail":"{range_begin:/registry/clusterrolebindings/tiller-clusterrolebinding; range_end:; response_count:0; response_revision:423; }","duration":"102.752087ms","start":"2024-01-08T20:10:49.602813Z","end":"2024-01-08T20:10:49.705565Z","steps":["trace[1160723819] 'agreement among raft nodes before linearized reading'  (duration: 101.836788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:10:49.704697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.619353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:10:49.705737Z","caller":"traceutil/trace.go:171","msg":"trace[622677534] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:423; }","duration":"102.654311ms","start":"2024-01-08T20:10:49.603074Z","end":"2024-01-08T20:10:49.705728Z","steps":["trace[622677534] 'agreement among raft nodes before linearized reading'  (duration: 101.609096ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:10:50.001548Z","caller":"traceutil/trace.go:171","msg":"trace[1522411250] linearizableReadLoop","detail":"{readStateIndex:445; appliedIndex:443; }","duration":"107.516663ms","start":"2024-01-08T20:10:49.894009Z","end":"2024-01-08T20:10:50.001526Z","steps":["trace[1522411250] 'read index received'  (duration: 97.655512ms)","trace[1522411250] 'applied index is now lower than readState.Index'  (duration: 9.860344ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T20:10:50.001873Z","caller":"traceutil/trace.go:171","msg":"trace[2142591227] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"110.249949ms","start":"2024-01-08T20:10:49.891604Z","end":"2024-01-08T20:10:50.001854Z","steps":["trace[2142591227] 'process raft request'  (duration: 98.520484ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:10:50.002199Z","caller":"traceutil/trace.go:171","msg":"trace[2142873998] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"110.33291ms","start":"2024-01-08T20:10:49.89184Z","end":"2024-01-08T20:10:50.002173Z","steps":["trace[2142873998] 'process raft request'  (duration: 109.417708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:10:50.00285Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.07724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-08T20:10:50.002887Z","caller":"traceutil/trace.go:171","msg":"trace[1865576759] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:446; }","duration":"103.132233ms","start":"2024-01-08T20:10:49.899745Z","end":"2024-01-08T20:10:50.002877Z","steps":["trace[1865576759] 'agreement among raft nodes before linearized reading'  (duration: 102.971421ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:10:50.003903Z","caller":"traceutil/trace.go:171","msg":"trace[1371647469] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"108.026992ms","start":"2024-01-08T20:10:49.895864Z","end":"2024-01-08T20:10:50.003891Z","steps":["trace[1371647469] 'process raft request'  (duration: 105.450922ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:10:50.003995Z","caller":"traceutil/trace.go:171","msg":"trace[168577798] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"104.045085ms","start":"2024-01-08T20:10:49.899938Z","end":"2024-01-08T20:10:50.003983Z","steps":["trace[168577798] 'process raft request'  (duration: 101.412616ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:10:50.004114Z","caller":"traceutil/trace.go:171","msg":"trace[997023629] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"103.964366ms","start":"2024-01-08T20:10:49.900142Z","end":"2024-01-08T20:10:50.004107Z","steps":["trace[997023629] 'process raft request'  (duration: 101.257987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:10:50.004342Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.349819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-08T20:10:50.004375Z","caller":"traceutil/trace.go:171","msg":"trace[1387456573] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:446; }","duration":"110.390273ms","start":"2024-01-08T20:10:49.893977Z","end":"2024-01-08T20:10:50.004367Z","steps":["trace[1387456573] 'agreement among raft nodes before linearized reading'  (duration: 110.33001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:10:50.210828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.792997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-08T20:10:50.28761Z","caller":"traceutil/trace.go:171","msg":"trace[405747385] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:458; }","duration":"181.561207ms","start":"2024-01-08T20:10:50.106015Z","end":"2024-01-08T20:10:50.287576Z","steps":["trace[405747385] 'agreement among raft nodes before linearized reading'  (duration: 104.766935ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:10:50.2112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.987492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:10:50.28805Z","caller":"traceutil/trace.go:171","msg":"trace[1715751177] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:458; }","duration":"186.834157ms","start":"2024-01-08T20:10:50.101197Z","end":"2024-01-08T20:10:50.288032Z","steps":["trace[1715751177] 'agreement among raft nodes before linearized reading'  (duration: 109.969792ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:13:30.40652Z","caller":"traceutil/trace.go:171","msg":"trace[403577510] transaction","detail":"{read_only:false; response_revision:1765; number_of_response:1; }","duration":"147.632146ms","start":"2024-01-08T20:13:30.25886Z","end":"2024-01-08T20:13:30.406492Z","steps":["trace[403577510] 'process raft request'  (duration: 105.752416ms)","trace[403577510] 'compare'  (duration: 41.722113ms)"],"step_count":2}
	
	
	==> gcp-auth [112fb9e097f6284b6863db606486df2c3b106a41738a754eab0c1f77f113f071] <==
	2024/01/08 20:12:04 GCP Auth Webhook started!
	2024/01/08 20:12:17 Ready to marshal response ...
	2024/01/08 20:12:17 Ready to write response ...
	2024/01/08 20:12:17 Ready to marshal response ...
	2024/01/08 20:12:17 Ready to write response ...
	2024/01/08 20:12:23 Ready to marshal response ...
	2024/01/08 20:12:23 Ready to write response ...
	2024/01/08 20:12:27 Ready to marshal response ...
	2024/01/08 20:12:27 Ready to write response ...
	2024/01/08 20:12:27 Ready to marshal response ...
	2024/01/08 20:12:27 Ready to write response ...
	2024/01/08 20:12:29 Ready to marshal response ...
	2024/01/08 20:12:29 Ready to write response ...
	2024/01/08 20:12:54 Ready to marshal response ...
	2024/01/08 20:12:54 Ready to write response ...
	2024/01/08 20:13:05 Ready to marshal response ...
	2024/01/08 20:13:05 Ready to write response ...
	2024/01/08 20:13:05 Ready to marshal response ...
	2024/01/08 20:13:05 Ready to write response ...
	2024/01/08 20:13:05 Ready to marshal response ...
	2024/01/08 20:13:05 Ready to write response ...
	2024/01/08 20:13:20 Ready to marshal response ...
	2024/01/08 20:13:20 Ready to write response ...
	2024/01/08 20:14:49 Ready to marshal response ...
	2024/01/08 20:14:49 Ready to write response ...
	
	
	==> kernel <==
	 20:15:00 up 57 min,  0 users,  load average: 0.26, 0.58, 0.31
	Linux addons-793365 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [721e913055e08ffd2cca7e01a85166661c614ad18795d89fd59db3831086728b] <==
	I0108 20:12:59.555965       1 main.go:227] handling current node
	I0108 20:13:09.593284       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:09.593329       1 main.go:227] handling current node
	I0108 20:13:19.604732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:19.604772       1 main.go:227] handling current node
	I0108 20:13:29.609254       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:29.609286       1 main.go:227] handling current node
	I0108 20:13:39.622805       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:39.622841       1 main.go:227] handling current node
	I0108 20:13:49.691080       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:49.691105       1 main.go:227] handling current node
	I0108 20:13:59.703472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:13:59.703496       1 main.go:227] handling current node
	I0108 20:14:09.708228       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:09.708266       1 main.go:227] handling current node
	I0108 20:14:19.721662       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:19.721699       1 main.go:227] handling current node
	I0108 20:14:29.726991       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:29.727025       1 main.go:227] handling current node
	I0108 20:14:39.737496       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:39.737527       1 main.go:227] handling current node
	I0108 20:14:49.791262       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:49.791300       1 main.go:227] handling current node
	I0108 20:14:59.802565       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:14:59.802587       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ec40dbb2e1d20208f369b5aa433ab66f1ea4f39e4c5e6fb27cf23cd146c9adda] <==
	I0108 20:12:41.892836       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0108 20:12:42.908975       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0108 20:12:43.963732       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0108 20:13:05.845362       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.94.134"}
	I0108 20:13:06.458472       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0108 20:13:35.604730       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.604800       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:13:35.615008       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.615090       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:13:35.622251       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.622463       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:13:35.623742       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.623851       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:13:35.632748       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.632812       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:13:35.638297       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.638355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:13:35.649365       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.649422       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:13:35.688390       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:13:35.688495       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0108 20:13:36.624009       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 20:13:36.688920       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 20:13:36.703645       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 20:14:50.023918       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.24.196"}
	
	
	==> kube-controller-manager [6443d45663b4c00bd6ede8cab6e9f5b22048738c0e3b550630dc088cbba6f6ae] <==
	W0108 20:14:04.160163       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:04.160194       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:12.204114       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:12.204163       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:13.782775       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:13.782806       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:35.421885       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:35.421916       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:14:49.834814       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 20:14:49.846431       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-wb955"
	I0108 20:14:49.854119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.391174ms"
	I0108 20:14:49.868432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.230715ms"
	I0108 20:14:49.882065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.549332ms"
	I0108 20:14:49.882185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.472µs"
	W0108 20:14:50.787820       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:50.787870       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:14:51.911462       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 20:14:51.912087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="11.83µs"
	I0108 20:14:51.917687       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0108 20:14:52.318643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.230362ms"
	I0108 20:14:52.318832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="110.452µs"
	W0108 20:14:53.003660       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:53.003692       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:14:53.685872       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:14:53.685908       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [d0f7f49c20d28246f311946b0485d727795502e4d6aa08f5338f499a8edc1b54] <==
	I0108 20:10:49.294807       1 server_others.go:69] "Using iptables proxy"
	I0108 20:10:49.593967       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 20:10:50.107875       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:10:50.191249       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:10:50.191385       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:10:50.191419       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:10:50.191473       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:10:50.191768       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:10:50.192247       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:10:50.193407       1 config.go:188] "Starting service config controller"
	I0108 20:10:50.206676       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:10:50.193962       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:10:50.194335       1 config.go:315] "Starting node config controller"
	I0108 20:10:50.206827       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:10:50.206870       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:10:50.207020       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:10:50.206910       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:10:50.207808       1 shared_informer.go:318] Caches are synced for node config
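The route_localnet=1 line above is what permits NodePort traffic addressed to 127.0.0.1 at all; if that sysctl were reset to 0, loopback NodePort requests such as the test's curl would be silently dropped. Its current value can be verified with:

	out/minikube-linux-amd64 -p addons-793365 ssh "sysctl net.ipv4.conf.all.route_localnet"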
	
	
	==> kube-scheduler [3bdace17e17259fd9aa05edbe26afe705ec93caa5f345d06c74962c7d989e62e] <==
	W0108 20:10:28.705346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:10:28.705380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 20:10:28.705417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:10:28.705439       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:10:28.705444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:10:28.705464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:10:28.705480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:10:28.705478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:10:28.706065       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 20:10:28.706088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 20:10:29.529882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:10:29.529926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:10:29.534333       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:10:29.534389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:10:29.577937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:10:29.577995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:10:29.675984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:10:29.676027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 20:10:29.683352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:10:29.683406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:10:29.764811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:10:29.764843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:10:29.847113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:10:29.847167       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0108 20:10:30.196032       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 20:14:50 addons-793365 kubelet[1547]: I0108 20:14:50.046315    1547 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spscl\" (UniqueName: \"kubernetes.io/projected/1bc4a041-2438-4571-abbf-5cf6f094a907-kube-api-access-spscl\") pod \"hello-world-app-5d77478584-wb955\" (UID: \"1bc4a041-2438-4571-abbf-5cf6f094a907\") " pod="default/hello-world-app-5d77478584-wb955"
	Jan 08 20:14:50 addons-793365 kubelet[1547]: W0108 20:14:50.508167    1547 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c121bc6424fe94323111d718fcbdd9396d67f2274847d4271bd42c984bcb0af2/crio-92d750e7c261717ffc8029217bbbf63a5a9c1bf4392faf117ff17824406e7367 WatchSource:0}: Error finding container 92d750e7c261717ffc8029217bbbf63a5a9c1bf4392faf117ff17824406e7367: Status 404 returned error can't find the container with id 92d750e7c261717ffc8029217bbbf63a5a9c1bf4392faf117ff17824406e7367
	Jan 08 20:14:51 addons-793365 kubelet[1547]: I0108 20:14:51.195065    1547 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfbk7\" (UniqueName: \"kubernetes.io/projected/d08b09ea-8398-44e8-b330-9bb9f48906b0-kube-api-access-wfbk7\") pod \"d08b09ea-8398-44e8-b330-9bb9f48906b0\" (UID: \"d08b09ea-8398-44e8-b330-9bb9f48906b0\") "
	Jan 08 20:14:51 addons-793365 kubelet[1547]: I0108 20:14:51.197582    1547 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08b09ea-8398-44e8-b330-9bb9f48906b0-kube-api-access-wfbk7" (OuterVolumeSpecName: "kube-api-access-wfbk7") pod "d08b09ea-8398-44e8-b330-9bb9f48906b0" (UID: "d08b09ea-8398-44e8-b330-9bb9f48906b0"). InnerVolumeSpecName "kube-api-access-wfbk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:14:51 addons-793365 kubelet[1547]: I0108 20:14:51.294460    1547 scope.go:117] "RemoveContainer" containerID="98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671"
	Jan 08 20:14:51 addons-793365 kubelet[1547]: I0108 20:14:51.295591    1547 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wfbk7\" (UniqueName: \"kubernetes.io/projected/d08b09ea-8398-44e8-b330-9bb9f48906b0-kube-api-access-wfbk7\") on node \"addons-793365\" DevicePath \"\""
	Jan 08 20:14:51 addons-793365 kubelet[1547]: I0108 20:14:51.323735    1547 scope.go:117] "RemoveContainer" containerID="98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671"
	Jan 08 20:14:51 addons-793365 kubelet[1547]: E0108 20:14:51.324289    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671\": container with ID starting with 98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671 not found: ID does not exist" containerID="98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671"
	Jan 08 20:14:51 addons-793365 kubelet[1547]: I0108 20:14:51.324332    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671"} err="failed to get container status \"98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671\": rpc error: code = NotFound desc = could not find container \"98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671\": container with ID starting with 98db48c8a63101541045733d3c4343d374bea1cc604b79b95ef79aac5766a671 not found: ID does not exist"
	Jan 08 20:14:51 addons-793365 kubelet[1547]: I0108 20:14:51.693441    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d08b09ea-8398-44e8-b330-9bb9f48906b0" path="/var/lib/kubelet/pods/d08b09ea-8398-44e8-b330-9bb9f48906b0/volumes"
	Jan 08 20:14:52 addons-793365 kubelet[1547]: I0108 20:14:52.311486    1547 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-wb955" podStartSLOduration=2.460177995 podCreationTimestamp="2024-01-08 20:14:49 +0000 UTC" firstStartedPulling="2024-01-08 20:14:50.511616519 +0000 UTC m=+258.980901959" lastFinishedPulling="2024-01-08 20:14:51.362865839 +0000 UTC m=+259.832151297" observedRunningTime="2024-01-08 20:14:52.311032846 +0000 UTC m=+260.780318304" watchObservedRunningTime="2024-01-08 20:14:52.311427333 +0000 UTC m=+260.780712791"
	Jan 08 20:14:53 addons-793365 kubelet[1547]: I0108 20:14:53.692475    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4e3994a6-63c7-4d5f-a9c9-218f191e7a4b" path="/var/lib/kubelet/pods/4e3994a6-63c7-4d5f-a9c9-218f191e7a4b/volumes"
	Jan 08 20:14:53 addons-793365 kubelet[1547]: I0108 20:14:53.692825    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d1930f3b-ace1-4174-b375-cb3591efcb60" path="/var/lib/kubelet/pods/d1930f3b-ace1-4174-b375-cb3591efcb60/volumes"
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.311498    1547 scope.go:117] "RemoveContainer" containerID="c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61"
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.325066    1547 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6061692f-a8fe-4e87-9eec-fe9e3cccb35c-webhook-cert\") pod \"6061692f-a8fe-4e87-9eec-fe9e3cccb35c\" (UID: \"6061692f-a8fe-4e87-9eec-fe9e3cccb35c\") "
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.325142    1547 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfdxm\" (UniqueName: \"kubernetes.io/projected/6061692f-a8fe-4e87-9eec-fe9e3cccb35c-kube-api-access-tfdxm\") pod \"6061692f-a8fe-4e87-9eec-fe9e3cccb35c\" (UID: \"6061692f-a8fe-4e87-9eec-fe9e3cccb35c\") "
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.326998    1547 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6061692f-a8fe-4e87-9eec-fe9e3cccb35c-kube-api-access-tfdxm" (OuterVolumeSpecName: "kube-api-access-tfdxm") pod "6061692f-a8fe-4e87-9eec-fe9e3cccb35c" (UID: "6061692f-a8fe-4e87-9eec-fe9e3cccb35c"). InnerVolumeSpecName "kube-api-access-tfdxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.327008    1547 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6061692f-a8fe-4e87-9eec-fe9e3cccb35c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6061692f-a8fe-4e87-9eec-fe9e3cccb35c" (UID: "6061692f-a8fe-4e87-9eec-fe9e3cccb35c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.328452    1547 scope.go:117] "RemoveContainer" containerID="c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61"
	Jan 08 20:14:55 addons-793365 kubelet[1547]: E0108 20:14:55.328851    1547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61\": container with ID starting with c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61 not found: ID does not exist" containerID="c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61"
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.328890    1547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61"} err="failed to get container status \"c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61\": rpc error: code = NotFound desc = could not find container \"c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61\": container with ID starting with c78c3bdd852abda3f5d38439e6935db1f06d944789e2823deb497010a2f0ca61 not found: ID does not exist"
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.426317    1547 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tfdxm\" (UniqueName: \"kubernetes.io/projected/6061692f-a8fe-4e87-9eec-fe9e3cccb35c-kube-api-access-tfdxm\") on node \"addons-793365\" DevicePath \"\""
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.426375    1547 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6061692f-a8fe-4e87-9eec-fe9e3cccb35c-webhook-cert\") on node \"addons-793365\" DevicePath \"\""
	Jan 08 20:14:55 addons-793365 kubelet[1547]: I0108 20:14:55.693019    1547 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6061692f-a8fe-4e87-9eec-fe9e3cccb35c" path="/var/lib/kubelet/pods/6061692f-a8fe-4e87-9eec-fe9e3cccb35c/volumes"
	Jan 08 20:14:58 addons-793365 kubelet[1547]: E0108 20:14:58.825986    1547 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/686584a378549fbfab16e00822a344681c9998840bfed4252e375b2582dc2763/diff" to get inode usage: stat /var/lib/containers/storage/overlay/686584a378549fbfab16e00822a344681c9998840bfed4252e375b2582dc2763/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-controller-69cff4fd79-ffm56_6061692f-a8fe-4e87-9eec-fe9e3cccb35c/controller/0.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-controller-69cff4fd79-ffm56_6061692f-a8fe-4e87-9eec-fe9e3cccb35c/controller/0.log: no such file or directory
	
	
	==> storage-provisioner [f5fdb8fed8278e674bf9a80ee7af781455b2ad22e0fbc1cf136da17798978348] <==
	I0108 20:11:21.991318       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:11:22.198739       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:11:22.199176       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:11:22.288505       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:11:22.289207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-793365_e5a4bdeb-1d9f-420a-8f27-0ee2a922b054!
	I0108 20:11:22.289124       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe7df994-ad9c-42c4-903a-ba692c2306b1", APIVersion:"v1", ResourceVersion:"922", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-793365_e5a4bdeb-1d9f-420a-8f27-0ee2a922b054 became leader
	I0108 20:11:22.389608       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-793365_e5a4bdeb-1d9f-420a-8f27-0ee2a922b054!
	

-- /stdout --
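
The storage-provisioner lines above show standard client-go leader election against an Endpoints lock. As a hedged debugging sketch (profile and object names taken from this run), the current lock holder is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation on that object and can be read back with:

	kubectl --context addons-793365 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
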
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-793365 -n addons-793365
helpers_test.go:261: (dbg) Run:  kubectl --context addons-793365 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.92s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (183.09s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-592184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-592184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.889641497s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-592184 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-592184 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b5f128ea-b590-44c2-8cd1-3fd836e7c5ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b5f128ea-b590-44c2-8cd1-3fd836e7c5ce] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.00357186s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-592184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 20:22:17.008689   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:22:44.695105   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-592184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.909323342s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
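
Exit status 28 is curl's CURLE_OPERATION_TIMEDOUT, passed through by minikube ssh, so the ingress controller never answered at all rather than serving an error page. A hedged manual re-run of the probe; the -m, -o and -w flags are additions for a bounded timeout and a visible status code, the rest is the failing command verbatim:

	out/minikube-linux-amd64 -p ingress-addon-legacy-592184 ssh "curl -s -m 10 -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/"
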
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-592184 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-592184 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0108 20:23:13.023106   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:13.028491   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:13.038798   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:13.059281   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:13.099717   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:13.180160   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:13.340614   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:13.661314   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:14.302381   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:15.583021   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:23:18.143575   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.010713379s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
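
A DNS timeout here means nothing answered on 192.168.49.2:53 at all, which points at the ingress-dns pod or the node port rather than a missing record. A hedged equivalent query with an explicit short timeout, so "no server" and NXDOMAIN stay distinguishable:

	dig +time=3 +tries=1 @192.168.49.2 hello-john.test A
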
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-592184 addons disable ingress-dns --alsologtostderr -v=1
E0108 20:23:23.264122   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-592184 addons disable ingress-dns --alsologtostderr -v=1: (2.705856385s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-592184 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-592184 addons disable ingress --alsologtostderr -v=1: (7.495947146s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-592184
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-592184:

-- stdout --
	[
	    {
	        "Id": "224c03443594133028ec15a6449fc15c7f3fe3d0044d3860d8dd06e14a665f7d",
	        "Created": "2024-01-08T20:19:17.825998636Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 58780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:19:18.134584619Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/224c03443594133028ec15a6449fc15c7f3fe3d0044d3860d8dd06e14a665f7d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/224c03443594133028ec15a6449fc15c7f3fe3d0044d3860d8dd06e14a665f7d/hostname",
	        "HostsPath": "/var/lib/docker/containers/224c03443594133028ec15a6449fc15c7f3fe3d0044d3860d8dd06e14a665f7d/hosts",
	        "LogPath": "/var/lib/docker/containers/224c03443594133028ec15a6449fc15c7f3fe3d0044d3860d8dd06e14a665f7d/224c03443594133028ec15a6449fc15c7f3fe3d0044d3860d8dd06e14a665f7d-json.log",
	        "Name": "/ingress-addon-legacy-592184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-592184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-592184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a50940181f09fc1a5442c9e24141a079b97a274b88bad48c368d02ef4b78226-init/diff:/var/lib/docker/overlay2/2fffc6399525ec20cf4113360863206b9b39bff791b2620dc189d266ef6bfe67/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a50940181f09fc1a5442c9e24141a079b97a274b88bad48c368d02ef4b78226/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a50940181f09fc1a5442c9e24141a079b97a274b88bad48c368d02ef4b78226/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a50940181f09fc1a5442c9e24141a079b97a274b88bad48c368d02ef4b78226/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-592184",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-592184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-592184",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-592184",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-592184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2e0fff31a5e3c95f3eefb065b9abec4ec11a7c568738cbfb42e78f1716f21a1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a2e0fff31a5e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-592184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "224c03443594",
	                        "ingress-addon-legacy-592184"
	                    ],
	                    "NetworkID": "e678672c1b343d3440ae4b1fba0fed85ac3e5550cd8f9f714b1bebbb26c3de25",
	                    "EndpointID": "6497bbbacc414f9b9a1ae1e12663806f50d36e2b5fcac2c7980f449b93e8be7e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
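
The inspect output above shows each guest port published on an ephemeral 127.0.0.1 host port. minikube resolves its ssh endpoint from that same structure with the Go template that also appears in the provisioning log below; a sketch of that lookup for this container:

	docker container inspect ingress-addon-legacy-592184 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
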
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-592184 -n ingress-addon-legacy-592184
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-592184 logs -n 25
E0108 20:23:33.504554   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-592184 logs -n 25: (1.216818192s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-563235 ssh stat                                              | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | /mount-9p/created-by-pod                                                |                             |         |         |                     |                     |
	| ssh     | functional-563235 ssh sudo                                              | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | umount -f /mount-9p                                                     |                             |         |         |                     |                     |
	| mount   | -p functional-563235                                                    | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port609153744/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                     |                             |         |         |                     |                     |
	| ssh     | functional-563235 ssh findmnt                                           | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                  |                             |         |         |                     |                     |
	| ssh     | functional-563235 ssh findmnt                                           | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | -T /mount-9p | grep 9p                                                  |                             |         |         |                     |                     |
	| ssh     | functional-563235 ssh -- ls                                             | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | -la /mount-9p                                                           |                             |         |         |                     |                     |
	| ssh     | functional-563235 ssh sudo                                              | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | umount -f /mount-9p                                                     |                             |         |         |                     |                     |
	| mount   | -p functional-563235                                                    | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount2   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                  |                             |         |         |                     |                     |
	| mount   | -p functional-563235                                                    | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount3   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                  |                             |         |         |                     |                     |
	| ssh     | functional-563235 ssh findmnt                                           | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | -T /mount1                                                              |                             |         |         |                     |                     |
	| mount   | -p functional-563235                                                    | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount1   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                  |                             |         |         |                     |                     |
	| image   | functional-563235 image ls                                              | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	| start   | -p functional-563235                                                    | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC |                     |
	|         | --dry-run --memory                                                      |                             |         |         |                     |                     |
	|         | 250MB --alsologtostderr                                                 |                             |         |         |                     |                     |
	|         | --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                |                             |         |         |                     |                     |
	| image   | functional-563235 image build -t                                        | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | localhost/my-image:functional-563235                                    |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-563235                                                       | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | image ls --format json                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                             |         |         |                     |                     |
	| image   | functional-563235                                                       | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | image ls --format table                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                             |         |         |                     |                     |
	| image   | functional-563235 image ls                                              | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	| delete  | -p functional-563235                                                    | functional-563235           | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:19 UTC |
	| start   | -p ingress-addon-legacy-592184                                          | ingress-addon-legacy-592184 | jenkins | v1.32.0 | 08 Jan 24 20:19 UTC | 08 Jan 24 20:20 UTC |
	|         | --kubernetes-version=v1.18.20                                           |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                       |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                    |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-592184                                             | ingress-addon-legacy-592184 | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|         | addons enable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                  |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-592184                                             | ingress-addon-legacy-592184 | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC | 08 Jan 24 20:20 UTC |
	|         | addons enable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                  |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-592184                                             | ingress-addon-legacy-592184 | jenkins | v1.32.0 | 08 Jan 24 20:20 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                           |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                            |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-592184 ip                                          | ingress-addon-legacy-592184 | jenkins | v1.32.0 | 08 Jan 24 20:23 UTC | 08 Jan 24 20:23 UTC |
	| addons  | ingress-addon-legacy-592184                                             | ingress-addon-legacy-592184 | jenkins | v1.32.0 | 08 Jan 24 20:23 UTC | 08 Jan 24 20:23 UTC |
	|         | addons disable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                  |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-592184                                             | ingress-addon-legacy-592184 | jenkins | v1.32.0 | 08 Jan 24 20:23 UTC | 08 Jan 24 20:23 UTC |
	|         | addons disable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                  |                             |         |         |                     |                     |
	|---------|-------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:19:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:19:02.719878   58158 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:19:02.720221   58158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:19:02.720234   58158 out.go:309] Setting ErrFile to fd 2...
	I0108 20:19:02.720241   58158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:19:02.720546   58158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:19:02.721322   58158 out.go:303] Setting JSON to false
	I0108 20:19:02.722853   58158 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3669,"bootTime":1704741474,"procs":495,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:19:02.722958   58158 start.go:138] virtualization: kvm guest
	I0108 20:19:02.725699   58158 out.go:177] * [ingress-addon-legacy-592184] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:19:02.727649   58158 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:19:02.727678   58158 notify.go:220] Checking for updates...
	I0108 20:19:02.729806   58158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:19:02.731672   58158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:19:02.733310   58158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:19:02.735009   58158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:19:02.736612   58158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:19:02.738500   58158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:19:02.766202   58158 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:19:02.766324   58158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:19:02.826537   58158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2024-01-08 20:19:02.817407364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:19:02.826641   58158 docker.go:295] overlay module found
	I0108 20:19:02.828788   58158 out.go:177] * Using the docker driver based on user configuration
	I0108 20:19:02.830184   58158 start.go:298] selected driver: docker
	I0108 20:19:02.830196   58158 start.go:902] validating driver "docker" against <nil>
	I0108 20:19:02.830206   58158 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:19:02.831084   58158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:19:02.890009   58158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2024-01-08 20:19:02.880071072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:19:02.890278   58158 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:19:02.890546   58158 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:19:02.892848   58158 out.go:177] * Using Docker driver with root privileges
	I0108 20:19:02.894607   58158 cni.go:84] Creating CNI manager for ""
	I0108 20:19:02.894629   58158 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:19:02.894640   58158 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:19:02.894651   58158 start_flags.go:323] config:
	{Name:ingress-addon-legacy-592184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:19:02.896548   58158 out.go:177] * Starting control plane node ingress-addon-legacy-592184 in cluster ingress-addon-legacy-592184
	I0108 20:19:02.898078   58158 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:19:02.899718   58158 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:19:02.901324   58158 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:19:02.901458   58158 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:19:02.921156   58158 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:19:02.921187   58158 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:19:02.941584   58158 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 20:19:02.941626   58158 cache.go:56] Caching tarball of preloaded images
	I0108 20:19:02.941838   58158 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:19:02.944086   58158 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 20:19:02.945774   58158 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:19:02.982601   58158 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 20:19:09.169226   58158 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:19:09.169383   58158 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:19:10.216497   58158 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0108 20:19:10.216916   58158 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/config.json ...
	I0108 20:19:10.216961   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/config.json: {Name:mk4554f5c1cef10248e97dacb50a872bea00da6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:10.217183   58158 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:19:10.217224   58158 start.go:365] acquiring machines lock for ingress-addon-legacy-592184: {Name:mkf3f6c6fc773cf81d3fb1de3e2caa89b8d017b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:19:10.217300   58158 start.go:369] acquired machines lock for "ingress-addon-legacy-592184" in 53.602µs
	I0108 20:19:10.217326   58158 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-592184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:19:10.217478   58158 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:19:10.220610   58158 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 20:19:10.220966   58158 start.go:159] libmachine.API.Create for "ingress-addon-legacy-592184" (driver="docker")
	I0108 20:19:10.221017   58158 client.go:168] LocalClient.Create starting
	I0108 20:19:10.221133   58158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem
	I0108 20:19:10.221194   58158 main.go:141] libmachine: Decoding PEM data...
	I0108 20:19:10.221217   58158 main.go:141] libmachine: Parsing certificate...
	I0108 20:19:10.221287   58158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem
	I0108 20:19:10.221313   58158 main.go:141] libmachine: Decoding PEM data...
	I0108 20:19:10.221333   58158 main.go:141] libmachine: Parsing certificate...
	I0108 20:19:10.221734   58158 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-592184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:19:10.240091   58158 cli_runner.go:211] docker network inspect ingress-addon-legacy-592184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:19:10.240214   58158 network_create.go:281] running [docker network inspect ingress-addon-legacy-592184] to gather additional debugging logs...
	I0108 20:19:10.240242   58158 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-592184
	W0108 20:19:10.260378   58158 cli_runner.go:211] docker network inspect ingress-addon-legacy-592184 returned with exit code 1
	I0108 20:19:10.260446   58158 network_create.go:284] error running [docker network inspect ingress-addon-legacy-592184]: docker network inspect ingress-addon-legacy-592184: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-592184 not found
	I0108 20:19:10.260473   58158 network_create.go:286] output of [docker network inspect ingress-addon-legacy-592184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-592184 not found
	
	** /stderr **
	I0108 20:19:10.260659   58158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:19:10.279016   58158 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002297670}
	I0108 20:19:10.279078   58158 network_create.go:124] attempt to create docker network ingress-addon-legacy-592184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 20:19:10.279142   58158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-592184 ingress-addon-legacy-592184
	I0108 20:19:10.337410   58158 network_create.go:108] docker network ingress-addon-legacy-592184 192.168.49.0/24 created
	I0108 20:19:10.337469   58158 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-592184" container
	I0108 20:19:10.337554   58158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:19:10.356075   58158 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-592184 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592184 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:19:10.377065   58158 oci.go:103] Successfully created a docker volume ingress-addon-legacy-592184
	I0108 20:19:10.377164   58158 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-592184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592184 --entrypoint /usr/bin/test -v ingress-addon-legacy-592184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:19:12.165672   58158 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-592184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592184 --entrypoint /usr/bin/test -v ingress-addon-legacy-592184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.788451345s)
	I0108 20:19:12.165713   58158 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-592184
	I0108 20:19:12.165737   58158 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:19:12.165762   58158 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:19:12.165838   58158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-592184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:19:17.751523   58158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-592184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.585615634s)
	I0108 20:19:17.751578   58158 kic.go:203] duration metric: took 5.585813 seconds to extract preloaded images to volume
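The two completed docker runs above are the preload-sidecar pattern: a throwaway container mounts the named volume (at /var, then /extractDir) and untars the lz4 image preload into it, so the node container boots with its image store already populated. A hand-rolled sketch of the same idea (volume name, tarball path, and image are illustrative; the image must ship tar and lz4, as the kicbase image does):

	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preloaded-images.tar.lz4":/preloaded.tar:ro \
	  -v demo-vol:/extractDir \
	  --entrypoint /usr/bin/tar \
	  demo-image -I lz4 -xf /preloaded.tar -C /extractDir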
	W0108 20:19:17.751777   58158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:19:17.751922   58158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:19:17.807765   58158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-592184 --name ingress-addon-legacy-592184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-592184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-592184 --network ingress-addon-legacy-592184 --ip 192.168.49.2 --volume ingress-addon-legacy-592184:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:19:18.144056   58158 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592184 --format={{.State.Running}}
	I0108 20:19:18.165367   58158 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592184 --format={{.State.Status}}
	I0108 20:19:18.186330   58158 cli_runner.go:164] Run: docker exec ingress-addon-legacy-592184 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:19:18.259171   58158 oci.go:144] the created container "ingress-addon-legacy-592184" has a running status.
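That one long docker run is the whole "machine" for the docker driver: a privileged container with a static IP on the dedicated network, the preloaded volume mounted at /var, and selected ports published to ephemeral loopback ports on the host. A trimmed, annotated sketch of the same shape (names are illustrative; the real invocation also sets minikube's labels and a few more published ports):

	# privileged + tmpfs /run so systemd can boot inside the container;
	# static IP on the bridge created earlier; preload volume becomes /var;
	# --publish=127.0.0.1::22 maps sshd to a random loopback port on the host
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --network demo-net --ip 192.168.49.2 \
	  -v demo-vol:/var \
	  --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
	  --memory=4096mb --cpus=2 \
	  --hostname demo-node --name demo-node \
	  demo-kicbase-image

The double-colon publish syntax is why the provisioning steps below keep running docker container inspect on (index .NetworkSettings.Ports "22/tcp"): the host port (32787 in this run) is only known after the container starts.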
	I0108 20:19:18.259228   58158 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa...
	I0108 20:19:18.361261   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 20:19:18.361325   58158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:19:18.384032   58158 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592184 --format={{.State.Status}}
	I0108 20:19:18.404136   58158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:19:18.404158   58158 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-592184 chown docker:docker /home/docker/.ssh/authorized_keys]
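The key handling above keeps the private key on the host and pushes only the public half into the container, then fixes ownership with a privileged exec. Done by hand it would look roughly like this (paths, port, and container name illustrative; assumes /home/docker/.ssh already exists in the image):

	ssh-keygen -t rsa -N '' -f ./id_rsa
	docker cp ./id_rsa.pub demo-node:/home/docker/.ssh/authorized_keys
	docker exec --privileged demo-node chown docker:docker /home/docker/.ssh/authorized_keys
	# the host port comes from the 22/tcp mapping inspected above
	ssh -i ./id_rsa -p 32787 docker@127.0.0.1 true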
	I0108 20:19:18.481374   58158 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592184 --format={{.State.Status}}
	I0108 20:19:18.502380   58158 machine.go:88] provisioning docker machine ...
	I0108 20:19:18.502433   58158 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-592184"
	I0108 20:19:18.502506   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:18.525721   58158 main.go:141] libmachine: Using SSH client type: native
	I0108 20:19:18.526074   58158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0108 20:19:18.526091   58158 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-592184 && echo "ingress-addon-legacy-592184" | sudo tee /etc/hostname
	I0108 20:19:18.526815   58158 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33468->127.0.0.1:32787: read: connection reset by peer
	I0108 20:19:21.668090   58158 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-592184
	
	I0108 20:19:21.668191   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:21.687957   58158 main.go:141] libmachine: Using SSH client type: native
	I0108 20:19:21.688395   58158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0108 20:19:21.688429   58158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-592184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-592184/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-592184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:19:21.812333   58158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:19:21.812361   58158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-11003/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-11003/.minikube}
	I0108 20:19:21.812380   58158 ubuntu.go:177] setting up certificates
	I0108 20:19:21.812392   58158 provision.go:83] configureAuth start
	I0108 20:19:21.812441   58158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-592184
	I0108 20:19:21.829386   58158 provision.go:138] copyHostCerts
	I0108 20:19:21.829421   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:19:21.829448   58158 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem, removing ...
	I0108 20:19:21.829453   58158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:19:21.829514   58158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem (1123 bytes)
	I0108 20:19:21.829581   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:19:21.829598   58158 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem, removing ...
	I0108 20:19:21.829602   58158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:19:21.829624   58158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem (1679 bytes)
	I0108 20:19:21.829720   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:19:21.829743   58158 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem, removing ...
	I0108 20:19:21.829747   58158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:19:21.829779   58158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem (1078 bytes)
	I0108 20:19:21.829841   58158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-592184 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-592184]
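minikube signs this server certificate against its own CA in Go; an openssl equivalent makes the SAN list above concrete (file names illustrative, not minikube's actual implementation):

	# issue a CSR, then sign it with the existing CA, attaching the same SANs
	openssl req -new -newkey rsa:2048 -nodes \
	  -subj "/O=jenkins.ingress-addon-legacy-592184" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-592184')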
	I0108 20:19:22.052045   58158 provision.go:172] copyRemoteCerts
	I0108 20:19:22.052117   58158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:19:22.052156   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:22.072079   58158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa Username:docker}
	I0108 20:19:22.169096   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:19:22.169177   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:19:22.193114   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:19:22.193205   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 20:19:22.217088   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:19:22.217150   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:19:22.243277   58158 provision.go:86] duration metric: configureAuth took 430.864072ms
	I0108 20:19:22.243322   58158 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:19:22.243621   58158 config.go:182] Loaded profile config "ingress-addon-legacy-592184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 20:19:22.243805   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:22.263127   58158 main.go:141] libmachine: Using SSH client type: native
	I0108 20:19:22.263488   58158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0108 20:19:22.263506   58158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:19:22.504651   58158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:19:22.504681   58158 machine.go:91] provisioned docker machine in 4.002273479s
	I0108 20:19:22.504690   58158 client.go:171] LocalClient.Create took 12.283654331s
	I0108 20:19:22.504706   58158 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-592184" took 12.283744952s
	I0108 20:19:22.504714   58158 start.go:300] post-start starting for "ingress-addon-legacy-592184" (driver="docker")
	I0108 20:19:22.504725   58158 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:19:22.504783   58158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:19:22.504817   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:22.525467   58158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa Username:docker}
	I0108 20:19:22.617709   58158 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:19:22.621244   58158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:19:22.621280   58158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:19:22.621288   58158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:19:22.621294   58158 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:19:22.621305   58158 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/addons for local assets ...
	I0108 20:19:22.621356   58158 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/files for local assets ...
	I0108 20:19:22.621431   58158 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> 177612.pem in /etc/ssl/certs
	I0108 20:19:22.621445   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> /etc/ssl/certs/177612.pem
	I0108 20:19:22.621529   58158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:19:22.630961   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:19:22.656892   58158 start.go:303] post-start completed in 152.159492ms
	I0108 20:19:22.657339   58158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-592184
	I0108 20:19:22.674626   58158 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/config.json ...
	I0108 20:19:22.674939   58158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:19:22.674984   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:22.693768   58158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa Username:docker}
	I0108 20:19:22.780072   58158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:19:22.784444   58158 start.go:128] duration metric: createHost completed in 12.566944129s
	I0108 20:19:22.784482   58158 start.go:83] releasing machines lock for "ingress-addon-legacy-592184", held for 12.567166688s
	I0108 20:19:22.784581   58158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-592184
	I0108 20:19:22.803520   58158 ssh_runner.go:195] Run: cat /version.json
	I0108 20:19:22.803582   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:22.803612   58158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:19:22.803679   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:22.823264   58158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa Username:docker}
	I0108 20:19:22.823382   58158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa Username:docker}
	I0108 20:19:22.999501   58158 ssh_runner.go:195] Run: systemctl --version
	I0108 20:19:23.005002   58158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:19:23.145967   58158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:19:23.150103   58158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:19:23.168865   58158 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:19:23.168953   58158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:19:23.201422   58158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
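The find/mv pair above disables CRI-O's stock CNI configs by renaming them with a .mk_disabled suffix (so they can be restored later) rather than deleting them; kindnet's config, written afterwards, then becomes the only active one. The same command with proper shell quoting:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;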
	I0108 20:19:23.201448   58158 start.go:475] detecting cgroup driver to use...
	I0108 20:19:23.201485   58158 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:19:23.201530   58158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:19:23.215911   58158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:19:23.226729   58158 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:19:23.226814   58158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:19:23.241182   58158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:19:23.256863   58158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:19:23.338893   58158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:19:23.428403   58158 docker.go:233] disabling docker service ...
	I0108 20:19:23.428464   58158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:19:23.447948   58158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:19:23.460413   58158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:19:23.540492   58158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:19:23.631525   58158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:19:23.644728   58158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:19:23.662112   58158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:19:23.662163   58158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:19:23.672009   58158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:19:23.672067   58158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:19:23.681271   58158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:19:23.690956   58158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:19:23.700838   58158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:19:23.711279   58158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:19:23.720502   58158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:19:23.728935   58158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:19:23.812364   58158 ssh_runner.go:195] Run: sudo systemctl restart crio
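The sed edits above pin the sandbox image and align CRI-O's cgroup driver with the kubelet's before the restart. The resulting drop-in is not echoed in the log; reconstructed from the edits, /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly (section headers assumed from CRI-O's stock layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"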
	I0108 20:19:23.948108   58158 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:19:23.948197   58158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:19:23.952353   58158 start.go:543] Will wait 60s for crictl version
	I0108 20:19:23.952434   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:23.955868   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:19:23.990261   58158 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 20:19:23.990393   58158 ssh_runner.go:195] Run: crio --version
	I0108 20:19:24.027098   58158 ssh_runner.go:195] Run: crio --version
	I0108 20:19:24.068881   58158 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0108 20:19:24.070634   58158 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-592184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:19:24.089707   58158 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 20:19:24.094443   58158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:19:24.107618   58158 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:19:24.107688   58158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:19:24.155756   58158 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:19:24.155836   58158 ssh_runner.go:195] Run: which lz4
	I0108 20:19:24.159283   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 20:19:24.159404   58158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 20:19:24.162619   58158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:19:24.162646   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0108 20:19:25.214965   58158 crio.go:444] Took 1.055607 seconds to copy over tarball
	I0108 20:19:25.215078   58158 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:19:27.742510   58158 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.527406284s)
	I0108 20:19:27.742549   58158 crio.go:451] Took 2.527547 seconds to extract the tarball
	I0108 20:19:27.742558   58158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 20:19:27.816484   58158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:19:27.853326   58158 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:19:27.853365   58158 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 20:19:27.853471   58158 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:19:27.853502   58158 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:19:27.853509   58158 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 20:19:27.853509   58158 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:19:27.853546   58158 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:19:27.853589   58158 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 20:19:27.853477   58158 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:19:27.853675   58158 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:19:27.854869   58158 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:19:27.854877   58158 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:19:27.854880   58158 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:19:27.854876   58158 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:19:27.854928   58158 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:19:27.854876   58158 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 20:19:27.854877   58158 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 20:19:27.854882   58158 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:19:28.049898   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:19:28.056745   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:19:28.060163   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0108 20:19:28.093172   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 20:19:28.094594   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0108 20:19:28.101649   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:19:28.131869   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:19:28.191048   58158 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0108 20:19:28.191100   58158 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:19:28.191138   58158 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0108 20:19:28.191156   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:28.191174   58158 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 20:19:28.191185   58158 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0108 20:19:28.191215   58158 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 20:19:28.191222   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:28.191231   58158 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0108 20:19:28.191250   58158 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:19:28.191254   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:28.191277   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:28.192236   58158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:19:28.198578   58158 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0108 20:19:28.198626   58158 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:19:28.198665   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:28.229705   58158 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0108 20:19:28.229757   58158 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:19:28.229770   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:19:28.229792   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:28.229920   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 20:19:28.229983   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 20:19:28.230005   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 20:19:28.292244   58158 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0108 20:19:28.292311   58158 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:19:28.292323   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:19:28.292363   58158 ssh_runner.go:195] Run: which crictl
	I0108 20:19:28.393356   58158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 20:19:28.393493   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:19:28.393613   58158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 20:19:28.397403   58158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 20:19:28.397461   58158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 20:19:28.405192   58158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 20:19:28.405286   58158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:19:28.430425   58158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 20:19:28.441171   58158 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 20:19:28.441240   58158 cache_images.go:92] LoadImages completed in 587.8593ms
	W0108 20:19:28.441330   58158 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
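The LoadImages pass above works per image: inspect via podman to check the stored hash, crictl rmi the stale copy when it doesn't match, then load a per-image tarball from .minikube/cache/images. The warning is benign in this run, since the preload tarball already supplied the images and the per-image cache was never downloaded. A simplified sketch of the per-image branch (image name taken from the log):

	img=registry.k8s.io/pause:3.2
	if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	  sudo crictl rmi "$img" 2>/dev/null || true
	  # minikube would now copy in and load the cached tarball for $img;
	  # that file does not exist here, hence the warning above
	fi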
	I0108 20:19:28.441416   58158 ssh_runner.go:195] Run: crio config
	I0108 20:19:28.508429   58158 cni.go:84] Creating CNI manager for ""
	I0108 20:19:28.508457   58158 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:19:28.508475   58158 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:19:28.508501   58158 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-592184 NodeName:ingress-addon-legacy-592184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 20:19:28.508665   58158 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-592184"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:19:28.508760   58158 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-592184 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
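Two details in the generated unit are easy to misread: the blank ExecStart= line is systemd's idiom for clearing the ExecStart inherited from the base kubelet.service before the drop-in sets its own, and the fragment lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd a few lines below). Inside the node, the merged result can be checked with:

	# show the base unit plus drop-ins, in the order systemd merges them
	systemctl cat kubelet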
	I0108 20:19:28.508830   58158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 20:19:28.519482   58158 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:19:28.519609   58158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:19:28.528742   58158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0108 20:19:28.547335   58158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 20:19:28.565483   58158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0108 20:19:28.583817   58158 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:19:28.586955   58158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:19:28.596983   58158 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184 for IP: 192.168.49.2
	I0108 20:19:28.597045   58158 certs.go:190] acquiring lock for shared ca certs: {Name:mk77871b3b3f5891ac4ba9a63281bc46e0e62e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:28.597212   58158 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key
	I0108 20:19:28.597255   58158 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key
	I0108 20:19:28.597299   58158 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.key
	I0108 20:19:28.597311   58158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt with IP's: []
	I0108 20:19:28.901029   58158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt ...
	I0108 20:19:28.901061   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: {Name:mk41e75d014e117416436f60f1b5da23698355db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:28.901223   58158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.key ...
	I0108 20:19:28.901237   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.key: {Name:mk756417f3bdba7cde9215c1006fd212a8899325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:28.901305   58158 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.key.dd3b5fb2
	I0108 20:19:28.901320   58158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:19:29.177397   58158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.crt.dd3b5fb2 ...
	I0108 20:19:29.177462   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.crt.dd3b5fb2: {Name:mkf6924aa35e87dbc10a9da65658f8e626085ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:29.177727   58158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.key.dd3b5fb2 ...
	I0108 20:19:29.177755   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.key.dd3b5fb2: {Name:mk6f47e2aa4d2864894509b01e4abeb1601d56d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:29.177851   58158 certs.go:337] copying /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.crt
	I0108 20:19:29.177933   58158 certs.go:341] copying /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.key
	I0108 20:19:29.177990   58158 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.key
	I0108 20:19:29.178010   58158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.crt with IP's: []
	I0108 20:19:29.766002   58158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.crt ...
	I0108 20:19:29.766043   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.crt: {Name:mk69a775c70e9a5b80daa10af0a09a3637a4c4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:29.766271   58158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.key ...
	I0108 20:19:29.766287   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.key: {Name:mke939f1fbba035ee2333521bccb4778305586c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:29.766362   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:19:29.766382   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:19:29.766393   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:19:29.766414   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:19:29.766427   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:19:29.766440   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:19:29.766454   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:19:29.766468   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:19:29.766543   58158 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem (1338 bytes)
	W0108 20:19:29.766584   58158 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761_empty.pem, impossibly tiny 0 bytes
	I0108 20:19:29.766596   58158 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:19:29.766624   58158 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:19:29.766652   58158 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:19:29.766681   58158 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem (1679 bytes)
	I0108 20:19:29.766731   58158 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:19:29.766764   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem -> /usr/share/ca-certificates/17761.pem
	I0108 20:19:29.766784   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> /usr/share/ca-certificates/177612.pem
	I0108 20:19:29.766801   58158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:19:29.767572   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:19:29.794290   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:19:29.819525   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:19:29.844848   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:19:29.871946   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:19:29.897574   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:19:29.923144   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:19:29.950334   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:19:29.974631   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem --> /usr/share/ca-certificates/17761.pem (1338 bytes)
	I0108 20:19:29.999601   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /usr/share/ca-certificates/177612.pem (1708 bytes)
	I0108 20:19:30.023701   58158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:19:30.047665   58158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:19:30.066190   58158 ssh_runner.go:195] Run: openssl version
	I0108 20:19:30.071532   58158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17761.pem && ln -fs /usr/share/ca-certificates/17761.pem /etc/ssl/certs/17761.pem"
	I0108 20:19:30.081371   58158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17761.pem
	I0108 20:19:30.085646   58158 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/17761.pem
	I0108 20:19:30.085733   58158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17761.pem
	I0108 20:19:30.092932   58158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17761.pem /etc/ssl/certs/51391683.0"
	I0108 20:19:30.103507   58158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177612.pem && ln -fs /usr/share/ca-certificates/177612.pem /etc/ssl/certs/177612.pem"
	I0108 20:19:30.112745   58158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177612.pem
	I0108 20:19:30.117135   58158 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/177612.pem
	I0108 20:19:30.117202   58158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177612.pem
	I0108 20:19:30.124859   58158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177612.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:19:30.135024   58158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:19:30.146172   58158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:19:30.149562   58158 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:19:30.149618   58158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:19:30.156525   58158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
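The opaque link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: TLS libraries locate a CA in /etc/ssl/certs by hashing its subject, so each certificate gets a symlink named <hash>.0. That is exactly what the openssl x509 -hash calls compute; the pattern by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"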
	I0108 20:19:30.168354   58158 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:19:30.172418   58158 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:19:30.172501   58158 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-592184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-592184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:19:30.172595   58158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:19:30.172656   58158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:19:30.208711   58158 cri.go:89] found id: ""
	I0108 20:19:30.208795   58158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:19:30.218486   58158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:19:30.227334   58158 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:19:30.227432   58158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:19:30.235604   58158 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:19:30.235656   58158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
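Note the targeted --ignore-preflight-errors list: checks such as Swap, SystemVerification, and the bridge-nf-call-iptables sysctl legitimately fail inside a container, so minikube suppresses exactly those rather than passing "all". Once init returns, the control plane can be probed from inside the node (sketch; assumes the kubectl binary that minikube stages next to kubeadm):

	sudo env KUBECONFIG=/etc/kubernetes/admin.conf \
	  /var/lib/minikube/binaries/v1.18.20/kubectl get pods -n kube-system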
	I0108 20:19:30.282917   58158 kubeadm.go:322] W0108 20:19:30.282196    1385 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 20:19:30.327236   58158 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 20:19:30.406261   58158 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:19:32.660759   58158 kubeadm.go:322] W0108 20:19:32.660095    1385 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 20:19:32.662256   58158 kubeadm.go:322] W0108 20:19:32.661877    1385 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 20:19:40.798824   58158 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 20:19:40.798900   58158 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:19:40.799032   58158 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:19:40.799131   58158 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 20:19:40.799190   58158 kubeadm.go:322] OS: Linux
	I0108 20:19:40.799262   58158 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:19:40.799315   58158 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:19:40.799371   58158 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:19:40.799440   58158 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:19:40.799490   58158 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:19:40.799533   58158 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:19:40.799600   58158 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:19:40.799677   58158 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:19:40.799776   58158 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:19:40.799886   58158 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:19:40.799962   58158 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:19:40.799996   58158 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:19:40.800054   58158 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:19:40.802749   58158 out.go:204]   - Generating certificates and keys ...
	I0108 20:19:40.802827   58158 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:19:40.802892   58158 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:19:40.802950   58158 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:19:40.803005   58158 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:19:40.803065   58158 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:19:40.803108   58158 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:19:40.803157   58158 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:19:40.803274   58158 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-592184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:19:40.803322   58158 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:19:40.803503   58158 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-592184 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 20:19:40.803610   58158 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:19:40.803708   58158 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:19:40.803781   58158 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:19:40.803832   58158 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:19:40.803885   58158 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:19:40.803936   58158 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:19:40.803996   58158 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:19:40.804043   58158 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:19:40.804103   58158 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:19:40.805828   58158 out.go:204]   - Booting up control plane ...
	I0108 20:19:40.805965   58158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:19:40.806078   58158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:19:40.806156   58158 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:19:40.806274   58158 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:19:40.806505   58158 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:19:40.806612   58158 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503404 seconds
	I0108 20:19:40.806705   58158 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:19:40.806823   58158 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:19:40.806919   58158 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:19:40.807101   58158 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-592184 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 20:19:40.807155   58158 kubeadm.go:322] [bootstrap-token] Using token: njdsev.nstujudhrxh21eiv
	I0108 20:19:40.808945   58158 out.go:204]   - Configuring RBAC rules ...
	I0108 20:19:40.809062   58158 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:19:40.809146   58158 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:19:40.809284   58158 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:19:40.809391   58158 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:19:40.809497   58158 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:19:40.809570   58158 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:19:40.809667   58158 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:19:40.809724   58158 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:19:40.809763   58158 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:19:40.809769   58158 kubeadm.go:322] 
	I0108 20:19:40.809825   58158 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:19:40.809831   58158 kubeadm.go:322] 
	I0108 20:19:40.809893   58158 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:19:40.809899   58158 kubeadm.go:322] 
	I0108 20:19:40.809927   58158 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:19:40.809982   58158 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:19:40.810024   58158 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:19:40.810030   58158 kubeadm.go:322] 
	I0108 20:19:40.810076   58158 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:19:40.810137   58158 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:19:40.810193   58158 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:19:40.810198   58158 kubeadm.go:322] 
	I0108 20:19:40.810272   58158 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:19:40.810338   58158 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:19:40.810344   58158 kubeadm.go:322] 
	I0108 20:19:40.810421   58158 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token njdsev.nstujudhrxh21eiv \
	I0108 20:19:40.810512   58158 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 \
	I0108 20:19:40.810533   58158 kubeadm.go:322]     --control-plane 
	I0108 20:19:40.810545   58158 kubeadm.go:322] 
	I0108 20:19:40.810622   58158 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:19:40.810628   58158 kubeadm.go:322] 
	I0108 20:19:40.810696   58158 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token njdsev.nstujudhrxh21eiv \
	I0108 20:19:40.810797   58158 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 
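The bootstrap token printed above (njdsev.nstujudhrxh21eiv) is short-lived, so a worker joining later would need a fresh one. A minimal sketch of regenerating the join command on the control-plane node, assuming stock kubeadm v1.18 tooling:

    # list existing bootstrap tokens and their remaining TTLs
    sudo kubeadm token list
    # mint a new token and print a complete, ready-to-run join command
    sudo kubeadm token create --print-join-command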
	I0108 20:19:40.810813   58158 cni.go:84] Creating CNI manager for ""
	I0108 20:19:40.810822   58158 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:19:40.812712   58158 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:19:40.814570   58158 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:19:40.819501   58158 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0108 20:19:40.819531   58158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:19:40.839199   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
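Once the manifest is applied, the kindnet DaemonSet should roll out one pod per node (kindnet-7nrcw shows up later in the container status table). A quick verification sketch; the DaemonSet name and the app=kindnet label are assumptions based on the stock kindnet manifest:

    # confirm the CNI pods are scheduled and running
    kubectl -n kube-system rollout status daemonset/kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide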
	I0108 20:19:41.352368   58158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:19:41.352502   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:41.352536   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=ingress-addon-legacy-592184 minikube.k8s.io/updated_at=2024_01_08T20_19_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:41.360579   58158 ops.go:34] apiserver oom_adj: -16
	I0108 20:19:41.496054   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:41.996632   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:42.497047   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:42.996572   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:43.496414   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:43.997039   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:44.496401   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:44.996302   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:45.496225   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:45.996637   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:46.496698   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:46.996559   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:47.497128   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:47.996275   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:48.496901   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:48.997047   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:49.496596   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:49.996193   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:50.496429   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:50.996865   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:51.496197   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:51.996194   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:52.496278   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:52.996621   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:53.496879   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:53.996661   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:54.496291   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:54.996478   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:55.497168   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:55.996669   58158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:19:56.194082   58158 kubeadm.go:1088] duration metric: took 14.84163678s to wait for elevateKubeSystemPrivileges.
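The repeated "get sa default" calls above are minikube polling until the default ServiceAccount exists, at which point the minikube-rbac ClusterRoleBinding created at 20:19:41 can take effect. The same wait-then-bind flow, sketched in shell on the assumption that kubectl already points at this cluster:

    # poll for the default ServiceAccount, then grant kube-system:default cluster-admin
    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default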
	I0108 20:19:56.194114   58158 kubeadm.go:406] StartCluster complete in 26.021622837s
	I0108 20:19:56.194135   58158 settings.go:142] acquiring lock: {Name:mk2f02a606763d8db203f5ac009c4f8430c5c61d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:56.194219   58158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:19:56.194891   58158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/kubeconfig: {Name:mkc68e8b275b7f7ddea94f238057103f0099d605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:19:56.195118   58158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:19:56.195264   58158 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:19:56.195320   58158 config.go:182] Loaded profile config "ingress-addon-legacy-592184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 20:19:56.195351   58158 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-592184"
	I0108 20:19:56.195399   58158 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-592184"
	I0108 20:19:56.195405   58158 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-592184"
	I0108 20:19:56.195422   58158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-592184"
	I0108 20:19:56.195475   58158 host.go:66] Checking if "ingress-addon-legacy-592184" exists ...
	I0108 20:19:56.195810   58158 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592184 --format={{.State.Status}}
	I0108 20:19:56.196002   58158 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592184 --format={{.State.Status}}
	I0108 20:19:56.195895   58158 kapi.go:59] client config for ingress-addon-legacy-592184: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:19:56.196694   58158 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:19:56.219441   58158 kapi.go:59] client config for ingress-addon-legacy-592184: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:19:56.219829   58158 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-592184"
	I0108 20:19:56.219908   58158 host.go:66] Checking if "ingress-addon-legacy-592184" exists ...
	I0108 20:19:56.220520   58158 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-592184 --format={{.State.Status}}
	I0108 20:19:56.230068   58158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:19:56.232275   58158 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:19:56.232305   58158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:19:56.232375   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:56.243020   58158 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:19:56.243053   58158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:19:56.243135   58158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-592184
	I0108 20:19:56.255258   58158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa Username:docker}
	I0108 20:19:56.265418   58158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/ingress-addon-legacy-592184/id_rsa Username:docker}
	I0108 20:19:56.297701   58158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 20:19:56.491351   58158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:19:56.491353   58158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:19:56.703723   58158 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-592184" context rescaled to 1 replicas
	I0108 20:19:56.703877   58158 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:19:56.705893   58158 out.go:177] * Verifying Kubernetes components...
	I0108 20:19:56.708289   58158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:19:56.829486   58158 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
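The sed pipeline at 20:19:56.297701 rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the gateway. A sketch for inspecting the result; the hosts stanza shown is lifted from the sed expression above, and the rest of the Corefile is assumed to be the stock default:

    # print the patched Corefile and look for the injected hosts block
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }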
	I0108 20:19:57.137991   58158 kapi.go:59] client config for ingress-addon-legacy-592184: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:19:57.138467   58158 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-592184" to be "Ready" ...
	I0108 20:19:57.197777   58158 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 20:19:57.199500   58158 addons.go:508] enable addons completed in 1.004203295s: enabled=[storage-provisioner default-storageclass]
	I0108 20:19:59.142425   58158 node_ready.go:58] node "ingress-addon-legacy-592184" has status "Ready":"False"
	I0108 20:20:01.290962   58158 node_ready.go:58] node "ingress-addon-legacy-592184" has status "Ready":"False"
	I0108 20:20:03.642624   58158 node_ready.go:58] node "ingress-addon-legacy-592184" has status "Ready":"False"
	I0108 20:20:06.141949   58158 node_ready.go:58] node "ingress-addon-legacy-592184" has status "Ready":"False"
	I0108 20:20:08.143136   58158 node_ready.go:58] node "ingress-addon-legacy-592184" has status "Ready":"False"
	I0108 20:20:10.641702   58158 node_ready.go:58] node "ingress-addon-legacy-592184" has status "Ready":"False"
	I0108 20:20:11.142919   58158 node_ready.go:49] node "ingress-addon-legacy-592184" has status "Ready":"True"
	I0108 20:20:11.142947   58158 node_ready.go:38] duration metric: took 14.004441314s waiting for node "ingress-addon-legacy-592184" to be "Ready" ...
	I0108 20:20:11.142960   58158 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:20:11.150882   58158 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-7pvlb" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:13.154771   58158 pod_ready.go:102] pod "coredns-66bff467f8-7pvlb" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-08 20:19:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0108 20:20:15.154847   58158 pod_ready.go:102] pod "coredns-66bff467f8-7pvlb" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-08 20:19:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0108 20:20:17.158931   58158 pod_ready.go:102] pod "coredns-66bff467f8-7pvlb" in "kube-system" namespace has status "Ready":"False"
	I0108 20:20:19.158975   58158 pod_ready.go:92] pod "coredns-66bff467f8-7pvlb" in "kube-system" namespace has status "Ready":"True"
	I0108 20:20:19.159012   58158 pod_ready.go:81] duration metric: took 8.00808223s waiting for pod "coredns-66bff467f8-7pvlb" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.159027   58158 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.165760   58158 pod_ready.go:92] pod "etcd-ingress-addon-legacy-592184" in "kube-system" namespace has status "Ready":"True"
	I0108 20:20:19.165808   58158 pod_ready.go:81] duration metric: took 6.772786ms waiting for pod "etcd-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.165834   58158 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.172041   58158 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-592184" in "kube-system" namespace has status "Ready":"True"
	I0108 20:20:19.172076   58158 pod_ready.go:81] duration metric: took 6.232013ms waiting for pod "kube-apiserver-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.172094   58158 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.177006   58158 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-592184" in "kube-system" namespace has status "Ready":"True"
	I0108 20:20:19.177035   58158 pod_ready.go:81] duration metric: took 4.931835ms waiting for pod "kube-controller-manager-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.177045   58158 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ghgnl" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.182672   58158 pod_ready.go:92] pod "kube-proxy-ghgnl" in "kube-system" namespace has status "Ready":"True"
	I0108 20:20:19.182702   58158 pod_ready.go:81] duration metric: took 5.649256ms waiting for pod "kube-proxy-ghgnl" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.182716   58158 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.352074   58158 request.go:629] Waited for 169.264957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-592184
	I0108 20:20:19.552652   58158 request.go:629] Waited for 196.386548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-592184
	I0108 20:20:19.556202   58158 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-592184" in "kube-system" namespace has status "Ready":"True"
	I0108 20:20:19.556235   58158 pod_ready.go:81] duration metric: took 373.5105ms waiting for pod "kube-scheduler-ingress-addon-legacy-592184" in "kube-system" namespace to be "Ready" ...
	I0108 20:20:19.556248   58158 pod_ready.go:38] duration metric: took 8.413275213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:20:19.556301   58158 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:20:19.556380   58158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:20:19.569717   58158 api_server.go:72] duration metric: took 22.865746811s to wait for apiserver process to appear ...
	I0108 20:20:19.569754   58158 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:20:19.569776   58158 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 20:20:19.574812   58158 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 20:20:19.575808   58158 api_server.go:141] control plane version: v1.18.20
	I0108 20:20:19.575848   58158 api_server.go:131] duration metric: took 6.085804ms to wait for apiserver health ...
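The healthz probe needs no credentials here: /healthz is exposed to unauthenticated callers through the system:public-info-viewer ClusterRole. An equivalent manual check from the host; -k is needed because the apiserver serves a minikube-signed certificate:

    curl -sk https://192.168.49.2:8443/healthz
    # expected body: ok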
	I0108 20:20:19.575858   58158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:20:19.752435   58158 request.go:629] Waited for 176.389123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:20:19.758839   58158 system_pods.go:59] 8 kube-system pods found
	I0108 20:20:19.758890   58158 system_pods.go:61] "coredns-66bff467f8-7pvlb" [78334607-29ac-46b1-8ca8-6f5ef14d02b3] Running
	I0108 20:20:19.758898   58158 system_pods.go:61] "etcd-ingress-addon-legacy-592184" [dfa2b159-a2a3-497d-91d9-bd7a903a0858] Running
	I0108 20:20:19.758907   58158 system_pods.go:61] "kindnet-7nrcw" [c0584e9a-176a-417c-a086-e99fbc410faf] Running
	I0108 20:20:19.758912   58158 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-592184" [fd75f2a4-88f4-4128-be71-2706e68ad1a9] Running
	I0108 20:20:19.758917   58158 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-592184" [22395f12-aa20-45e2-adb9-181e08bff50c] Running
	I0108 20:20:19.758924   58158 system_pods.go:61] "kube-proxy-ghgnl" [2855c2bd-3745-4d68-840a-5bd8e69f81f7] Running
	I0108 20:20:19.758928   58158 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-592184" [4f9777f7-8fa8-4ad0-bc62-07ff5e86facf] Running
	I0108 20:20:19.758936   58158 system_pods.go:61] "storage-provisioner" [244b66a9-8f49-4fb2-abd8-8e3a001ddc4c] Running
	I0108 20:20:19.758945   58158 system_pods.go:74] duration metric: took 183.080501ms to wait for pod list to return data ...
	I0108 20:20:19.758955   58158 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:20:19.952423   58158 request.go:629] Waited for 193.360066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:20:19.955223   58158 default_sa.go:45] found service account: "default"
	I0108 20:20:19.955255   58158 default_sa.go:55] duration metric: took 196.292607ms for default service account to be created ...
	I0108 20:20:19.955267   58158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:20:20.152885   58158 request.go:629] Waited for 197.522604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:20:20.159816   58158 system_pods.go:86] 8 kube-system pods found
	I0108 20:20:20.159844   58158 system_pods.go:89] "coredns-66bff467f8-7pvlb" [78334607-29ac-46b1-8ca8-6f5ef14d02b3] Running
	I0108 20:20:20.159849   58158 system_pods.go:89] "etcd-ingress-addon-legacy-592184" [dfa2b159-a2a3-497d-91d9-bd7a903a0858] Running
	I0108 20:20:20.159853   58158 system_pods.go:89] "kindnet-7nrcw" [c0584e9a-176a-417c-a086-e99fbc410faf] Running
	I0108 20:20:20.159858   58158 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-592184" [fd75f2a4-88f4-4128-be71-2706e68ad1a9] Running
	I0108 20:20:20.159861   58158 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-592184" [22395f12-aa20-45e2-adb9-181e08bff50c] Running
	I0108 20:20:20.159868   58158 system_pods.go:89] "kube-proxy-ghgnl" [2855c2bd-3745-4d68-840a-5bd8e69f81f7] Running
	I0108 20:20:20.159875   58158 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-592184" [4f9777f7-8fa8-4ad0-bc62-07ff5e86facf] Running
	I0108 20:20:20.159883   58158 system_pods.go:89] "storage-provisioner" [244b66a9-8f49-4fb2-abd8-8e3a001ddc4c] Running
	I0108 20:20:20.159892   58158 system_pods.go:126] duration metric: took 204.618509ms to wait for k8s-apps to be running ...
	I0108 20:20:20.159910   58158 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:20:20.159998   58158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:20:20.173536   58158 system_svc.go:56] duration metric: took 13.609047ms WaitForService to wait for kubelet.
	I0108 20:20:20.173571   58158 kubeadm.go:581] duration metric: took 23.469617362s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:20:20.173595   58158 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:20:20.352034   58158 request.go:629] Waited for 178.33168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0108 20:20:20.356013   58158 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 20:20:20.356054   58158 node_conditions.go:123] node cpu capacity is 8
	I0108 20:20:20.356072   58158 node_conditions.go:105] duration metric: took 182.470477ms to run NodePressure ...
	I0108 20:20:20.356088   58158 start.go:228] waiting for startup goroutines ...
	I0108 20:20:20.356096   58158 start.go:233] waiting for cluster config update ...
	I0108 20:20:20.356112   58158 start.go:242] writing updated cluster config ...
	I0108 20:20:20.356531   58158 ssh_runner.go:195] Run: rm -f paused
	I0108 20:20:20.409016   58158 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 20:20:20.411598   58158 out.go:177] 
	W0108 20:20:20.413544   58158 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 20:20:20.415444   58158 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 20:20:20.417081   58158 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-592184" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 08 20:23:09 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:09.571442861Z" level=info msg="Created container 32e5899406633122661e05b04e37de76821e2695789897a9a36d2d5ee2feec3f: default/hello-world-app-5f5d8b66bb-zdx9g/hello-world-app" id=50c1da27-e4c2-4aa9-b5e5-83cb2577240d name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 08 20:23:09 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:09.572111513Z" level=info msg="Starting container: 32e5899406633122661e05b04e37de76821e2695789897a9a36d2d5ee2feec3f" id=7052ef5a-c81f-4d9c-a900-a8a8255b6358 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 08 20:23:09 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:09.580148525Z" level=info msg="Started container" PID=4825 containerID=32e5899406633122661e05b04e37de76821e2695789897a9a36d2d5ee2feec3f description=default/hello-world-app-5f5d8b66bb-zdx9g/hello-world-app id=7052ef5a-c81f-4d9c-a900-a8a8255b6358 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=1ab6eba643a7ddf91d86bfa8eb59ba72b0ee13d527f4e15a97ebd25a2b3cbf76
	Jan 08 20:23:17 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:17.018774630Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=e884dc88-0585-4e4e-ba07-71ee17d2bee8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 20:23:25 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:25.019513178Z" level=info msg="Stopping pod sandbox: 523ccae5ddb6d46f55d1ca01ba88760f35309c7740b76b582a82b6879933be78" id=92dc7db9-873b-4c28-a2cd-cadff74e3a31 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:23:25 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:25.020904841Z" level=info msg="Stopped pod sandbox: 523ccae5ddb6d46f55d1ca01ba88760f35309c7740b76b582a82b6879933be78" id=92dc7db9-873b-4c28-a2cd-cadff74e3a31 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:23:25 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:25.545556697Z" level=info msg="Stopping pod sandbox: 523ccae5ddb6d46f55d1ca01ba88760f35309c7740b76b582a82b6879933be78" id=8ecdfc41-e5e2-4250-ad86-91e9c4b8a252 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:23:25 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:25.545624071Z" level=info msg="Stopped pod sandbox (already stopped): 523ccae5ddb6d46f55d1ca01ba88760f35309c7740b76b582a82b6879933be78" id=8ecdfc41-e5e2-4250-ad86-91e9c4b8a252 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:23:26 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:26.387591694Z" level=info msg="Stopping container: d67c3686e29d6b11fa9ad608f662d1697bae6ba1b0bdcc7e1b84d80c870810ff (timeout: 2s)" id=b4eb06ff-80f6-4e0a-806d-bd838dcbfdf1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:23:26 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:26.391399481Z" level=info msg="Stopping container: d67c3686e29d6b11fa9ad608f662d1697bae6ba1b0bdcc7e1b84d80c870810ff (timeout: 2s)" id=36fdc2b9-9e8d-4a19-af35-1e2d4201ba89 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.397079387Z" level=warning msg="Stopping container d67c3686e29d6b11fa9ad608f662d1697bae6ba1b0bdcc7e1b84d80c870810ff with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=b4eb06ff-80f6-4e0a-806d-bd838dcbfdf1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:23:28 ingress-addon-legacy-592184 conmon[3358]: conmon d67c3686e29d6b11fa9a <ninfo>: container 3370 exited with status 137
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.549854479Z" level=info msg="Stopped container d67c3686e29d6b11fa9ad608f662d1697bae6ba1b0bdcc7e1b84d80c870810ff: ingress-nginx/ingress-nginx-controller-7fcf777cb7-xwg6r/controller" id=36fdc2b9-9e8d-4a19-af35-1e2d4201ba89 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.549907547Z" level=info msg="Stopped container d67c3686e29d6b11fa9ad608f662d1697bae6ba1b0bdcc7e1b84d80c870810ff: ingress-nginx/ingress-nginx-controller-7fcf777cb7-xwg6r/controller" id=b4eb06ff-80f6-4e0a-806d-bd838dcbfdf1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.550612922Z" level=info msg="Stopping pod sandbox: bc3ee83ddb338cc483cb61e4b53dcb4d16673dfa37110e19dbf601ce60e26115" id=938695cf-eaf8-4408-940f-dbef5d276462 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.550610931Z" level=info msg="Stopping pod sandbox: bc3ee83ddb338cc483cb61e4b53dcb4d16673dfa37110e19dbf601ce60e26115" id=adecf19a-8964-4da4-afe6-606a52ec3bf8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.553794433Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-DKBL32BJPC2AGZ5E - [0:0]\n:KUBE-HP-JGSGZX2BBKXMHD2M - [0:0]\n-X KUBE-HP-JGSGZX2BBKXMHD2M\n-X KUBE-HP-DKBL32BJPC2AGZ5E\nCOMMIT\n"
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.555433362Z" level=info msg="Closing host port tcp:80"
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.555492527Z" level=info msg="Closing host port tcp:443"
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.556564161Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.556583577Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.556709179Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-xwg6r Namespace:ingress-nginx ID:bc3ee83ddb338cc483cb61e4b53dcb4d16673dfa37110e19dbf601ce60e26115 UID:7e5f914c-707e-42be-9857-f85263ded834 NetNS:/var/run/netns/8c801007-35d4-4acd-b042-7c81a1e59afe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.556822748Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-xwg6r from CNI network \"kindnet\" (type=ptp)"
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.597095497Z" level=info msg="Stopped pod sandbox: bc3ee83ddb338cc483cb61e4b53dcb4d16673dfa37110e19dbf601ce60e26115" id=938695cf-eaf8-4408-940f-dbef5d276462 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 20:23:28 ingress-addon-legacy-592184 crio[959]: time="2024-01-08 20:23:28.597214735Z" level=info msg="Stopped pod sandbox (already stopped): bc3ee83ddb338cc483cb61e4b53dcb4d16673dfa37110e19dbf601ce60e26115" id=adecf19a-8964-4da4-afe6-606a52ec3bf8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
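The stop sequence above shows CRI-O escalating after the 2s grace period: the controller process ignored the stop signal, so conmon records exit status 137 (128 + SIGKILL). The same escalation can be reproduced by hand with crictl, using the abbreviated container ID from the table below:

    # ask CRI-O to stop the container, allowing 2 seconds before SIGKILL
    sudo crictl stop --timeout 2 d67c3686e29d6
    # check the recorded exit code afterwards
    sudo crictl inspect d67c3686e29d6 | grep -i exitcode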
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	32e5899406633       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            24 seconds ago      Running             hello-world-app           0                   1ab6eba643a7d       hello-world-app-5f5d8b66bb-zdx9g
	f0098441cea9c       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   de7444fca5a99       nginx
	d67c3686e29d6       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   bc3ee83ddb338       ingress-nginx-controller-7fcf777cb7-xwg6r
	b99d75b12f6d3       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   ac5440bbca451       ingress-nginx-admission-patch-qf5s2
	6446568a64490       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   25a3426e60789       ingress-nginx-admission-create-jkhhj
	39a1619cb225c       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   6e9a89f2b1dde       coredns-66bff467f8-7pvlb
	214771888ee15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   9a8d9e970d5a6       storage-provisioner
	55cb132ee8883       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   3f9aca04ae294       kindnet-7nrcw
	4f54dde5ea765       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   dd3f5ecdbbe9f       kube-proxy-ghgnl
	4c3bac58253e0       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   f566278ce7bcb       kube-controller-manager-ingress-addon-legacy-592184
	81bd68a0f622b       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   bbf6182e877a1       kube-apiserver-ingress-addon-legacy-592184
	32ea6c5859ade       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   e9285769dbc0e       etcd-ingress-addon-legacy-592184
	e0880e697a988       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   b0caaba5ec028       kube-scheduler-ingress-addon-legacy-592184
	
	
	==> coredns [39a1619cb225c0bc6e589069b9d403153b0a78c7b2ac76262b7d27e6af392ffd] <==
	[INFO] 10.244.0.5:49733 - 4882 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00506012s
	[INFO] 10.244.0.5:56132 - 57497 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004726775s
	[INFO] 10.244.0.5:48272 - 18288 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004989666s
	[INFO] 10.244.0.5:54837 - 41658 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005102585s
	[INFO] 10.244.0.5:60557 - 17450 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005794093s
	[INFO] 10.244.0.5:38896 - 7341 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005861611s
	[INFO] 10.244.0.5:56052 - 2013 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005889464s
	[INFO] 10.244.0.5:38896 - 1599 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000087663s
	[INFO] 10.244.0.5:56052 - 63068 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048671s
	[INFO] 10.244.0.5:60557 - 25052 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116615s
	[INFO] 10.244.0.5:56132 - 44648 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006559283s
	[INFO] 10.244.0.5:49733 - 62089 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006766705s
	[INFO] 10.244.0.5:48272 - 20870 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006773595s
	[INFO] 10.244.0.5:54837 - 62798 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006750093s
	[INFO] 10.244.0.5:40574 - 56783 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006881961s
	[INFO] 10.244.0.5:54837 - 23642 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005431311s
	[INFO] 10.244.0.5:56132 - 3388 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005707868s
	[INFO] 10.244.0.5:54837 - 54603 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054694s
	[INFO] 10.244.0.5:56132 - 34805 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000024365s
	[INFO] 10.244.0.5:49733 - 59586 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005824808s
	[INFO] 10.244.0.5:48272 - 57264 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006468461s
	[INFO] 10.244.0.5:40574 - 39647 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006415982s
	[INFO] 10.244.0.5:49733 - 59694 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081535s
	[INFO] 10.244.0.5:48272 - 2378 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000136329s
	[INFO] 10.244.0.5:40574 - 25847 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078228s
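The NXDOMAIN/NOERROR pairs above are ndots-driven search-path expansion: the resolver tries every suffix from the pod's /etc/resolv.conf (cluster domains first, then google.internal and c.k8s-minikube.internal inherited from the GCE build host) before the fully qualified name answers NOERROR. A throwaway probe pod reproduces this; the busybox image choice is an assumption:

    # run a one-off pod and resolve the service name through CoreDNS
    kubectl run dnsprobe --rm -it --image=busybox --restart=Never -- \
      nslookup hello-world-app.default.svc.cluster.local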
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-592184
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-592184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=ingress-addon-legacy-592184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_19_41_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:19:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-592184
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:23:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:23:11 +0000   Mon, 08 Jan 2024 20:19:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:23:11 +0000   Mon, 08 Jan 2024 20:19:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:23:11 +0000   Mon, 08 Jan 2024 20:19:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:23:11 +0000   Mon, 08 Jan 2024 20:20:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-592184
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 f26a5d14f8d34d2497c67644027865b9
	  System UUID:                c52359cb-4f35-4f0e-b5d8-7f43e7ffee28
	  Boot ID:                    0e88edaa-666a-4348-8c8d-059e8a9aec1e
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-zdx9g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-7pvlb                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m38s
	  kube-system                 etcd-ingress-addon-legacy-592184                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kindnet-7nrcw                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-592184             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-592184    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-ghgnl                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-scheduler-ingress-addon-legacy-592184             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m1s (x4 over 4m1s)  kubelet     Node ingress-addon-legacy-592184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x4 over 4m1s)  kubelet     Node ingress-addon-legacy-592184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x3 over 4m1s)  kubelet     Node ingress-addon-legacy-592184 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m54s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s                kubelet     Node ingress-addon-legacy-592184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s                kubelet     Node ingress-addon-legacy-592184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s                kubelet     Node ingress-addon-legacy-592184 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m23s                kubelet     Node ingress-addon-legacy-592184 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.004992] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006676] FS-Cache: N-cookie d=00000000a15bc294{9p.inode} n=000000005063de54
	[  +0.008878] FS-Cache: N-key=[8] '99a00f0200000000'
	[  +3.062042] FS-Cache: Duplicate cookie detected
	[  +0.004771] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006803] FS-Cache: O-cookie d=0000000075a83ff0{9P.session} n=00000000d5056a16
	[  +0.007554] FS-Cache: O-key=[10] '34323935383035373738'
	[  +0.005406] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006649] FS-Cache: N-cookie d=0000000075a83ff0{9P.session} n=00000000a5112cf2
	[  +0.008936] FS-Cache: N-key=[10] '34323935383035373738'
	[  +6.625953] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 20:20] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[  +1.026420] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[  +2.015866] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[Jan 8 20:21] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[  +8.191099] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[ +16.130308] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[Jan 8 20:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	
	
	==> etcd [32ea6c5859ade126d564e612fbb41655f0ae8afdef35b0ec51e52f55cc1ee694] <==
	raft2024/01/08 20:19:34 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/08 20:19:34 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 20:19:34.111024 W | auth: simple token is not cryptographically signed
	2024-01-08 20:19:34.117554 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 20:19:34.117820 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 20:19:34 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 20:19:34.118836 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-08 20:19:34.120212 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 20:19:34.120369 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-08 20:19:34.120505 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/08 20:19:34 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/08 20:19:34 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/08 20:19:34 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/08 20:19:34 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/08 20:19:34 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-08 20:19:34.907421 I | etcdserver: published {Name:ingress-addon-legacy-592184 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-08 20:19:34.907458 I | embed: ready to serve client requests
	2024-01-08 20:19:34.907567 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 20:19:34.907633 I | embed: ready to serve client requests
	2024-01-08 20:19:34.908292 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 20:19:34.908396 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 20:19:34.909725 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-08 20:19:34.909800 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 20:20:00.266658 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-592184\" " with result "range_response_count:1 size:6674" took too long (126.141596ms) to execute
	2024-01-08 20:20:01.288683 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-592184\" " with result "range_response_count:1 size:6674" took too long (147.524942ms) to execute
	
	
	==> kernel <==
	 20:23:34 up  1:05,  0 users,  load average: 0.71, 1.01, 0.72
	Linux ingress-addon-legacy-592184 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [55cb132ee88834103472911f4f681e724bff1f3877cfa36bedf483fd626dc84f] <==
	I0108 20:21:30.506288       1 main.go:227] handling current node
	I0108 20:21:40.516128       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:21:40.516167       1 main.go:227] handling current node
	I0108 20:21:50.522020       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:21:50.522057       1 main.go:227] handling current node
	I0108 20:22:00.527850       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:00.527881       1 main.go:227] handling current node
	I0108 20:22:10.532769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:10.532802       1 main.go:227] handling current node
	I0108 20:22:20.545768       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:20.545809       1 main.go:227] handling current node
	I0108 20:22:30.551220       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:30.551269       1 main.go:227] handling current node
	I0108 20:22:40.563280       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:40.563310       1 main.go:227] handling current node
	I0108 20:22:50.567258       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:22:50.567304       1 main.go:227] handling current node
	I0108 20:23:00.572078       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:23:00.572111       1 main.go:227] handling current node
	I0108 20:23:10.577160       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:23:10.577197       1 main.go:227] handling current node
	I0108 20:23:20.589819       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:23:20.589858       1 main.go:227] handling current node
	I0108 20:23:30.595863       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 20:23:30.595903       1 main.go:227] handling current node
	
	
	==> kube-apiserver [81bd68a0f622bd7104bff795c663134178c14dd720f612dfd8fa3888552d86f4] <==
	I0108 20:19:37.748144       1 establishing_controller.go:76] Starting EstablishingController
	E0108 20:19:37.754237       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0108 20:19:37.887839       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 20:19:37.888126       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 20:19:37.888157       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 20:19:37.888171       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 20:19:37.888194       1 cache.go:39] Caches are synced for autoregister controller
	I0108 20:19:38.747464       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 20:19:38.747511       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 20:19:38.753275       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 20:19:38.756841       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 20:19:38.756882       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 20:19:39.097849       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:19:39.136484       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 20:19:39.231275       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0108 20:19:39.232300       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 20:19:39.235386       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 20:19:40.060447       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 20:19:40.605594       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 20:19:40.719764       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 20:19:41.003510       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:19:56.013815       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 20:19:56.281744       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 20:20:21.212965       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 20:20:46.246234       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [4c3bac58253e07ea4b4120a8e2148f07b07ad1ad060f338d97e590ddab4a7fcf] <==
	I0108 20:19:56.515254       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0108 20:19:56.588184       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 20:19:56.588191       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 20:19:56.608815       1 shared_informer.go:230] Caches are synced for PV protection 
	I0108 20:19:56.612849       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0108 20:19:56.687638       1 shared_informer.go:230] Caches are synced for attach detach 
	I0108 20:19:56.687726       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 20:19:56.687746       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0108 20:19:56.690107       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0108 20:19:56.690690       1 shared_informer.go:230] Caches are synced for expand 
	I0108 20:19:56.987734       1 request.go:621] Throttling request took 1.06548742s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I0108 20:19:57.415689       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0108 20:19:57.415751       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 20:20:16.024536       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0108 20:20:16.024866       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod kube-system/storage-provisioner
	I0108 20:20:16.024949       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"coredns-66bff467f8-7pvlb", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod kube-system/coredns-66bff467f8-7pvlb
	I0108 20:20:21.202548       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5c5df0da-2ea3-4d56-a604-abfab17682dc", APIVersion:"apps/v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 20:20:21.210295       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"5a7000ae-9b9f-4ba9-a1f6-a9ac3d8d9410", APIVersion:"apps/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-xwg6r
	I0108 20:20:21.289620       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"de8de660-3b50-4b86-871a-3f53ad7f33bc", APIVersion:"batch/v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-jkhhj
	I0108 20:20:21.311796       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5c60f135-e38d-45bc-b646-70c81a79b21b", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-qf5s2
	I0108 20:20:24.204825       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"de8de660-3b50-4b86-871a-3f53ad7f33bc", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:20:25.205600       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5c60f135-e38d-45bc-b646-70c81a79b21b", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:23:07.582566       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"5a503489-10a1-4870-aed5-ece672d7d5cb", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 20:23:07.589931       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"8b982c21-3dae-429c-8115-bbe71c493e09", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-zdx9g
	E0108 20:23:31.113350       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-k5mtm" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [4f54dde5ea7657ba5e888dad1ca9b8a5455f30c447b782c582be5627fb44b6ba] <==
	W0108 20:19:57.196865       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 20:19:57.205071       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0108 20:19:57.205112       1 server_others.go:186] Using iptables Proxier.
	I0108 20:19:57.205502       1 server.go:583] Version: v1.18.20
	I0108 20:19:57.206095       1 config.go:315] Starting service config controller
	I0108 20:19:57.206116       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 20:19:57.206475       1 config.go:133] Starting endpoints config controller
	I0108 20:19:57.206695       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 20:19:57.306347       1 shared_informer.go:230] Caches are synced for service config 
	I0108 20:19:57.306961       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [e0880e697a988474ded91715f79494e52b95a60dc51ca00e843114f66353b4c5] <==
	W0108 20:19:37.797998       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 20:19:37.809713       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 20:19:37.809749       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0108 20:19:37.812467       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:19:37.812664       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:19:37.813435       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 20:19:37.887815       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 20:19:37.889306       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:19:37.889777       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:19:37.890052       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:19:37.890161       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:19:37.890269       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:19:37.890481       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:19:37.891387       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:19:37.891414       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:19:37.891531       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:19:37.891537       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:19:37.891662       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:19:37.891676       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:19:38.819774       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:19:38.880628       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:19:38.891412       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:19:38.927040       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:19:38.932230       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0108 20:19:40.912897       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 08 20:22:54 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:22:54.019534    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:22:54 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:22:54.019564    1860 pod_workers.go:191] Error syncing pod 34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753 ("kube-ingress-dns-minikube_kube-system(34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 20:23:06 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:06.019036    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:23:06 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:06.019091    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:23:06 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:06.019148    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:23:06 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:06.019181    1860 pod_workers.go:191] Error syncing pod 34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753 ("kube-ingress-dns-minikube_kube-system(34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 20:23:07 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:07.596180    1860 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 20:23:07 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:07.715949    1860 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-w9cd4" (UniqueName: "kubernetes.io/secret/c6019306-065f-4b27-bfe5-db752f53a9d0-default-token-w9cd4") pod "hello-world-app-5f5d8b66bb-zdx9g" (UID: "c6019306-065f-4b27-bfe5-db752f53a9d0")
	Jan 08 20:23:07 ingress-addon-legacy-592184 kubelet[1860]: W0108 20:23:07.960417    1860 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/224c03443594133028ec15a6449fc15c7f3fe3d0044d3860d8dd06e14a665f7d/crio-1ab6eba643a7ddf91d86bfa8eb59ba72b0ee13d527f4e15a97ebd25a2b3cbf76 WatchSource:0}: Error finding container 1ab6eba643a7ddf91d86bfa8eb59ba72b0ee13d527f4e15a97ebd25a2b3cbf76: Status 404 returned error &{%!s(*http.body=&{0xc000627020 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Jan 08 20:23:17 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:17.019140    1860 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:23:17 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:17.019180    1860 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:23:17 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:17.019223    1860 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 20:23:17 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:17.019249    1860 pod_workers.go:191] Error syncing pod 34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753 ("kube-ingress-dns-minikube_kube-system(34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 20:23:23 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:23.465450    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-pd8lk" (UniqueName: "kubernetes.io/secret/34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753-minikube-ingress-dns-token-pd8lk") pod "34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753" (UID: "34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753")
	Jan 08 20:23:23 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:23.467563    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753-minikube-ingress-dns-token-pd8lk" (OuterVolumeSpecName: "minikube-ingress-dns-token-pd8lk") pod "34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753" (UID: "34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753"). InnerVolumeSpecName "minikube-ingress-dns-token-pd8lk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:23:23 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:23.565844    1860 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-pd8lk" (UniqueName: "kubernetes.io/secret/34b8a97f-9ac2-4f5f-9e4b-01b3ee48c753-minikube-ingress-dns-token-pd8lk") on node "ingress-addon-legacy-592184" DevicePath ""
	Jan 08 20:23:26 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:26.389232    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-xwg6r.17a878ea3c2e2896", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-xwg6r", UID:"7e5f914c-707e-42be-9857-f85263ded834", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-592184"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f344f97123c96, ext:225919953960, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f344f97123c96, ext:225919953960, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-xwg6r.17a878ea3c2e2896" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:23:26 ingress-addon-legacy-592184 kubelet[1860]: E0108 20:23:26.395597    1860 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-xwg6r.17a878ea3c2e2896", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-xwg6r", UID:"7e5f914c-707e-42be-9857-f85263ded834", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-592184"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f344f97123c96, ext:225919953960, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f344f974c3741, ext:225923753695, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-xwg6r.17a878ea3c2e2896" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:23:29 ingress-addon-legacy-592184 kubelet[1860]: W0108 20:23:29.545117    1860 pod_container_deletor.go:77] Container "bc3ee83ddb338cc483cb61e4b53dcb4d16673dfa37110e19dbf601ce60e26115" not found in pod's containers
	Jan 08 20:23:30 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:30.501201    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-sjld7" (UniqueName: "kubernetes.io/secret/7e5f914c-707e-42be-9857-f85263ded834-ingress-nginx-token-sjld7") pod "7e5f914c-707e-42be-9857-f85263ded834" (UID: "7e5f914c-707e-42be-9857-f85263ded834")
	Jan 08 20:23:30 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:30.501299    1860 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7e5f914c-707e-42be-9857-f85263ded834-webhook-cert") pod "7e5f914c-707e-42be-9857-f85263ded834" (UID: "7e5f914c-707e-42be-9857-f85263ded834")
	Jan 08 20:23:30 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:30.503312    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5f914c-707e-42be-9857-f85263ded834-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7e5f914c-707e-42be-9857-f85263ded834" (UID: "7e5f914c-707e-42be-9857-f85263ded834"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:23:30 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:30.503739    1860 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5f914c-707e-42be-9857-f85263ded834-ingress-nginx-token-sjld7" (OuterVolumeSpecName: "ingress-nginx-token-sjld7") pod "7e5f914c-707e-42be-9857-f85263ded834" (UID: "7e5f914c-707e-42be-9857-f85263ded834"). InnerVolumeSpecName "ingress-nginx-token-sjld7". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:23:30 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:30.601796    1860 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7e5f914c-707e-42be-9857-f85263ded834-webhook-cert") on node "ingress-addon-legacy-592184" DevicePath ""
	Jan 08 20:23:30 ingress-addon-legacy-592184 kubelet[1860]: I0108 20:23:30.601879    1860 reconciler.go:319] Volume detached for volume "ingress-nginx-token-sjld7" (UniqueName: "kubernetes.io/secret/7e5f914c-707e-42be-9857-f85263ded834-ingress-nginx-token-sjld7") on node "ingress-addon-legacy-592184" DevicePath ""
	
	
	==> storage-provisioner [214771888ee154b96be636c0cf08d5e7b7a1ead793fbe02f2a6d873c28e906a9] <==
	I0108 20:20:15.961412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:20:15.994641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:20:15.994698       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:20:16.006193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:20:16.006496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-592184_fb87a50d-4961-46fe-92a1-0f6ff53f818a!
	I0108 20:20:16.008047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb755720-d2fb-4568-89b0-f0b32f5ee878", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-592184_fb87a50d-4961-46fe-92a1-0f6ff53f818a became leader
	I0108 20:20:16.106629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-592184_fb87a50d-4961-46fe-92a1-0f6ff53f818a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-592184 -n ingress-addon-legacy-592184
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-592184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.09s)
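Note on the failure mode above: the kubelet log shows CRI-O rejecting the ingress-dns image because "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." is a short name (no registry host) and the node's /etc/containers/registries.conf defines no unqualified-search registries. A minimal sketch of one workaround, assuming the image is hosted on docker.io (the remediation minikube itself ships may differ):

	# append a search registry on the node so short names can resolve,
	# then restart CRI-O so it rereads registries.conf
	out/minikube-linux-amd64 -p ingress-addon-legacy-592184 ssh -- \
	  "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"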

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-6c6nv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-6c6nv -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-6c6nv -- sh -c "ping -c 1 192.168.58.1": exit status 1 (202.428324ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-6c6nv): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-v8fbl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-v8fbl -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-v8fbl -- sh -c "ping -c 1 192.168.58.1": exit status 1 (209.625563ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-v8fbl): exit status 1
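The stderr above points at capabilities rather than routing: busybox's ping needs a raw ICMP socket (or an unprivileged ICMP echo socket permitted by net.ipv4.ping_group_range), and the pod evidently has neither here. A hedged sketch of how one might rerun the same probe with CAP_NET_RAW granted; the pod name "ping-check" and the overrides are illustrative, not part of the test:

	out/minikube-linux-amd64 kubectl -p multinode-209824 -- run ping-check \
	  --image=busybox --restart=Never \
	  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"ping-check","image":"busybox","command":["ping","-c","1","192.168.58.1"],"securityContext":{"capabilities":{"add":["NET_RAW"]}}}]}}'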
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-209824
helpers_test.go:235: (dbg) docker inspect multinode-209824:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8",
	        "Created": "2024-01-08T20:28:30.051242378Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 104498,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:28:30.385727017Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/hosts",
	        "LogPath": "/var/lib/docker/containers/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8-json.log",
	        "Name": "/multinode-209824",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-209824:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-209824",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ffa3c919621900e09e05769d2d23f59ecd99355e8e32019af467a00ec1a50d4-init/diff:/var/lib/docker/overlay2/2fffc6399525ec20cf4113360863206b9b39bff791b2620dc189d266ef6bfe67/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ffa3c919621900e09e05769d2d23f59ecd99355e8e32019af467a00ec1a50d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ffa3c919621900e09e05769d2d23f59ecd99355e8e32019af467a00ec1a50d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ffa3c919621900e09e05769d2d23f59ecd99355e8e32019af467a00ec1a50d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-209824",
	                "Source": "/var/lib/docker/volumes/multinode-209824/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-209824",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-209824",
	                "name.minikube.sigs.k8s.io": "multinode-209824",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "55baff798f5d02abdcb6fb4bae83ae61c8dc7f0acdd4c6cb4c9518774ff78d47",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/55baff798f5d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-209824": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8507f1719a09",
	                        "multinode-209824"
	                    ],
	                    "NetworkID": "7a2eb071c85bbbe2aadd3c51f63e0d37fce08d38c8ad19b9d6e7d8feaa098448",
	                    "EndpointID": "58862e6191ebb368c654ca7caddff97b727bc1f907e7b9b71304a74fa940ee6f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
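
The Ports block in the inspect dump above is where the loopback port numbers used throughout these logs come from: the container was started with empty host-port publishes (--publish=127.0.0.1::22 and friends, visible later in this log), so Docker assigned ephemeral ports 32843-32847 on 127.0.0.1. A minimal Go sketch of reading one mapping back, reusing the same inspect template that minikube's cli_runner runs further down in this log; the container name and the 22/tcp key are taken from the output above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort asks Docker which host port was mapped to the
    // container's 22/tcp, using the same Go template that appears in
    // the cli_runner lines later in this log.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("multinode-209824")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// Per the Ports block above this prints 32847.
    	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
    }
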
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-209824 -n multinode-209824
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-209824 logs -n 25: (1.5806699s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-800737                           | mount-start-2-800737 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-800737 ssh -- ls                    | mount-start-2-800737 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-783339                           | mount-start-1-783339 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-800737 ssh -- ls                    | mount-start-2-800737 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-800737                           | mount-start-2-800737 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	| start   | -p mount-start-2-800737                           | mount-start-2-800737 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	| ssh     | mount-start-2-800737 ssh -- ls                    | mount-start-2-800737 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-800737                           | mount-start-2-800737 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	| delete  | -p mount-start-1-783339                           | mount-start-1-783339 | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:28 UTC |
	| start   | -p multinode-209824                               | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:28 UTC | 08 Jan 24 20:30 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- apply -f                   | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- rollout                    | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- get pods -o                | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- get pods -o                | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-6c6nv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-v8fbl --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-6c6nv --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-v8fbl --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-6c6nv -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-v8fbl -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- get pods -o                | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-6c6nv                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC |                     |
	|         | busybox-5bc68d56bd-6c6nv -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC | 08 Jan 24 20:30 UTC |
	|         | busybox-5bc68d56bd-v8fbl                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-209824 -- exec                       | multinode-209824     | jenkins | v1.32.0 | 08 Jan 24 20:30 UTC |                     |
	|         | busybox-5bc68d56bd-v8fbl -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
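
	The two audit rows without an End Time are the probes this test fails on: ping -c 1 192.168.58.1 (the docker network gateway) from each busybox pod never completes. A standalone re-run of the same probe, as a sketch; the pod name comes from the table above, while the --context flag is an assumption, since the test itself drives kubectl through the minikube wrapper binary:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Same check as the audit rows above that never record an End Time:
	    	// ping the host-side gateway of the multinode-209824 network from
	    	// inside a busybox pod.
	    	out, err := exec.Command("kubectl", "--context", "multinode-209824",
	    		"exec", "busybox-5bc68d56bd-6c6nv", "--",
	    		"sh", "-c", "ping -c 1 192.168.58.1").CombinedOutput()
	    	fmt.Print(string(out))
	    	if err != nil {
	    		// In this report the probe times out, so kubectl exits non-zero.
	    		fmt.Println("probe failed:", err)
	    	}
	    }
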
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:28:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:28:23.528767  103895 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:28:23.529138  103895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:28:23.529150  103895 out.go:309] Setting ErrFile to fd 2...
	I0108 20:28:23.529157  103895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:28:23.529405  103895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:28:23.530194  103895 out.go:303] Setting JSON to false
	I0108 20:28:23.532115  103895 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4230,"bootTime":1704741474,"procs":676,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:28:23.532212  103895 start.go:138] virtualization: kvm guest
	I0108 20:28:23.535222  103895 out.go:177] * [multinode-209824] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:28:23.536948  103895 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:28:23.536994  103895 notify.go:220] Checking for updates...
	I0108 20:28:23.538784  103895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:28:23.540603  103895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:28:23.542318  103895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:28:23.544021  103895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:28:23.545797  103895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:28:23.547677  103895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:28:23.571183  103895 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:28:23.571321  103895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:28:23.632763  103895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 20:28:23.622817838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:28:23.632868  103895 docker.go:295] overlay module found
	I0108 20:28:23.635087  103895 out.go:177] * Using the docker driver based on user configuration
	I0108 20:28:23.636832  103895 start.go:298] selected driver: docker
	I0108 20:28:23.636854  103895 start.go:902] validating driver "docker" against <nil>
	I0108 20:28:23.636868  103895 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:28:23.637733  103895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:28:23.701130  103895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 20:28:23.689522189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:28:23.701313  103895 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:28:23.701553  103895 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:28:23.703696  103895 out.go:177] * Using Docker driver with root privileges
	I0108 20:28:23.705303  103895 cni.go:84] Creating CNI manager for ""
	I0108 20:28:23.705332  103895 cni.go:136] 0 nodes found, recommending kindnet
	I0108 20:28:23.705343  103895 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:28:23.705354  103895 start_flags.go:323] config:
	{Name:multinode-209824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-209824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:28:23.707177  103895 out.go:177] * Starting control plane node multinode-209824 in cluster multinode-209824
	I0108 20:28:23.708810  103895 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:28:23.710422  103895 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:28:23.711833  103895 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:28:23.711891  103895 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:28:23.711902  103895 cache.go:56] Caching tarball of preloaded images
	I0108 20:28:23.711924  103895 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:28:23.712049  103895 preload.go:174] Found /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:28:23.712065  103895 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:28:23.712529  103895 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/config.json ...
	I0108 20:28:23.712560  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/config.json: {Name:mk76bfe50b48718e951c304ce1bd37d49d753a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:23.731224  103895 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:28:23.731271  103895 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:28:23.731301  103895 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:28:23.731353  103895 start.go:365] acquiring machines lock for multinode-209824: {Name:mk70fbe1d2c173b02177bb349ca6804aa7db4a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:28:23.731562  103895 start.go:369] acquired machines lock for "multinode-209824" in 148.736µs
	I0108 20:28:23.731605  103895 start.go:93] Provisioning new machine with config: &{Name:multinode-209824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-209824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:28:23.731683  103895 start.go:125] createHost starting for "" (driver="docker")
	I0108 20:28:23.735254  103895 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 20:28:23.735587  103895 start.go:159] libmachine.API.Create for "multinode-209824" (driver="docker")
	I0108 20:28:23.735622  103895 client.go:168] LocalClient.Create starting
	I0108 20:28:23.735690  103895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem
	I0108 20:28:23.735738  103895 main.go:141] libmachine: Decoding PEM data...
	I0108 20:28:23.735758  103895 main.go:141] libmachine: Parsing certificate...
	I0108 20:28:23.735809  103895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem
	I0108 20:28:23.735829  103895 main.go:141] libmachine: Decoding PEM data...
	I0108 20:28:23.735838  103895 main.go:141] libmachine: Parsing certificate...
	I0108 20:28:23.736197  103895 cli_runner.go:164] Run: docker network inspect multinode-209824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 20:28:23.755146  103895 cli_runner.go:211] docker network inspect multinode-209824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 20:28:23.755212  103895 network_create.go:281] running [docker network inspect multinode-209824] to gather additional debugging logs...
	I0108 20:28:23.755229  103895 cli_runner.go:164] Run: docker network inspect multinode-209824
	W0108 20:28:23.771454  103895 cli_runner.go:211] docker network inspect multinode-209824 returned with exit code 1
	I0108 20:28:23.771502  103895 network_create.go:284] error running [docker network inspect multinode-209824]: docker network inspect multinode-209824: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-209824 not found
	I0108 20:28:23.771518  103895 network_create.go:286] output of [docker network inspect multinode-209824]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-209824 not found
	
	** /stderr **
	I0108 20:28:23.771660  103895 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:28:23.788680  103895 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dd87632bcb89 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f4:00:f4:b3} reservation:<nil>}
	I0108 20:28:23.789215  103895 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002103140}
	I0108 20:28:23.789255  103895 network_create.go:124] attempt to create docker network multinode-209824 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0108 20:28:23.789312  103895 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-209824 multinode-209824
	I0108 20:28:23.850729  103895 network_create.go:108] docker network multinode-209824 192.168.58.0/24 created
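
	network.go first rejects 192.168.49.0/24 because the addons cluster's bridge already owns it, then settles on 192.168.58.0/24. A simplified Go sketch of that scan; the 9-wide stride between candidates is inferred from the two subnets in this log, and the taken set here is hand-fed rather than read from docker network inspect:

	    package main

	    import "fmt"

	    // freeSubnet steps through private /24 candidates and returns the
	    // first one not already claimed by an existing docker network.
	    func freeSubnet(taken map[string]bool) string {
	    	for third := 49; third <= 254; third += 9 { // 192.168.49.0, 58.0, 67.0, ...
	    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
	    		if !taken[cidr] {
	    			return cidr
	    		}
	    	}
	    	return ""
	    }

	    func main() {
	    	// 192.168.49.0/24 is taken (see the "skipping subnet" line above),
	    	// so the scan lands on 192.168.58.0/24, matching the created network.
	    	fmt.Println(freeSubnet(map[string]bool{"192.168.49.0/24": true}))
	    }
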
	I0108 20:28:23.850780  103895 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-209824" container
	I0108 20:28:23.850861  103895 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:28:23.869716  103895 cli_runner.go:164] Run: docker volume create multinode-209824 --label name.minikube.sigs.k8s.io=multinode-209824 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:28:23.891060  103895 oci.go:103] Successfully created a docker volume multinode-209824
	I0108 20:28:23.891180  103895 cli_runner.go:164] Run: docker run --rm --name multinode-209824-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-209824 --entrypoint /usr/bin/test -v multinode-209824:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:28:24.443636  103895 oci.go:107] Successfully prepared a docker volume multinode-209824
	I0108 20:28:24.443713  103895 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:28:24.443743  103895 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:28:24.443827  103895 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-209824:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:28:29.969894  103895 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-209824:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.525985922s)
	I0108 20:28:29.969948  103895 kic.go:203] duration metric: took 5.526199 seconds to extract preloaded images to volume
	W0108 20:28:29.970145  103895 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:28:29.970258  103895 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:28:30.032532  103895 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-209824 --name multinode-209824 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-209824 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-209824 --network multinode-209824 --ip 192.168.58.2 --volume multinode-209824:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:28:30.395913  103895 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Running}}
	I0108 20:28:30.416210  103895 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:28:30.437629  103895 cli_runner.go:164] Run: docker exec multinode-209824 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:28:30.515204  103895 oci.go:144] the created container "multinode-209824" has a running status.
	I0108 20:28:30.515237  103895 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa...
	I0108 20:28:30.629081  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 20:28:30.629146  103895 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:28:30.653390  103895 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:28:30.673447  103895 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:28:30.673488  103895 kic_runner.go:114] Args: [docker exec --privileged multinode-209824 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:28:30.766436  103895 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:28:30.788768  103895 machine.go:88] provisioning docker machine ...
	I0108 20:28:30.788808  103895 ubuntu.go:169] provisioning hostname "multinode-209824"
	I0108 20:28:30.788879  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:30.813579  103895 main.go:141] libmachine: Using SSH client type: native
	I0108 20:28:30.814192  103895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0108 20:28:30.814222  103895 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-209824 && echo "multinode-209824" | sudo tee /etc/hostname
	I0108 20:28:30.815164  103895 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42932->127.0.0.1:32847: read: connection reset by peer
	I0108 20:28:33.951446  103895 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-209824
	
	I0108 20:28:33.951553  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:33.970948  103895 main.go:141] libmachine: Using SSH client type: native
	I0108 20:28:33.971388  103895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0108 20:28:33.971412  103895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-209824' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-209824/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-209824' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:28:34.101202  103895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:28:34.101266  103895 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-11003/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-11003/.minikube}
	I0108 20:28:34.101304  103895 ubuntu.go:177] setting up certificates
	I0108 20:28:34.101324  103895 provision.go:83] configureAuth start
	I0108 20:28:34.101414  103895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824
	I0108 20:28:34.119658  103895 provision.go:138] copyHostCerts
	I0108 20:28:34.119713  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:28:34.119752  103895 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem, removing ...
	I0108 20:28:34.119759  103895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:28:34.119845  103895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem (1078 bytes)
	I0108 20:28:34.119924  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:28:34.119955  103895 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem, removing ...
	I0108 20:28:34.119962  103895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:28:34.120002  103895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem (1123 bytes)
	I0108 20:28:34.120057  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:28:34.120085  103895 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem, removing ...
	I0108 20:28:34.120091  103895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:28:34.120117  103895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem (1679 bytes)
	I0108 20:28:34.120167  103895 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem org=jenkins.multinode-209824 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-209824]
	I0108 20:28:34.266626  103895 provision.go:172] copyRemoteCerts
	I0108 20:28:34.266686  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:28:34.266722  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:34.285650  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:28:34.376994  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:28:34.377096  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:28:34.403412  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:28:34.403498  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 20:28:34.428685  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:28:34.428795  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:28:34.455802  103895 provision.go:86] duration metric: configureAuth took 354.463055ms
	I0108 20:28:34.455833  103895 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:28:34.456024  103895 config.go:182] Loaded profile config "multinode-209824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:28:34.456155  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:34.474222  103895 main.go:141] libmachine: Using SSH client type: native
	I0108 20:28:34.474596  103895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0108 20:28:34.474616  103895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:28:34.700324  103895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:28:34.700362  103895 machine.go:91] provisioned docker machine in 3.911567668s
	I0108 20:28:34.700377  103895 client.go:171] LocalClient.Create took 10.964743807s
	I0108 20:28:34.700403  103895 start.go:167] duration metric: libmachine.API.Create for "multinode-209824" took 10.96481424s
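
	The %!s(MISSING) in the printf command above (and the stray (MISSING) markers elsewhere in this log) is Go's fmt package rendering a verb that was given no argument; the command that actually ran on the node had the value substituted in, only the logged copy of the format string lost it. A one-file reproduction, assuming nothing beyond the standard library:

	    package main

	    import "fmt"

	    func main() {
	    	// One %s verb, zero arguments: fmt renders the placeholder
	    	// literally, which is exactly what leaks into the log above.
	    	format := "sudo mkdir -p /etc/sysconfig && printf %s"
	    	fmt.Println(fmt.Sprintf(format))
	    }
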
	I0108 20:28:34.700418  103895 start.go:300] post-start starting for "multinode-209824" (driver="docker")
	I0108 20:28:34.700436  103895 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:28:34.700522  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:28:34.700578  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:34.720124  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:28:34.813674  103895 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:28:34.817521  103895 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 20:28:34.817551  103895 command_runner.go:130] > NAME="Ubuntu"
	I0108 20:28:34.817559  103895 command_runner.go:130] > VERSION_ID="22.04"
	I0108 20:28:34.817565  103895 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 20:28:34.817614  103895 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 20:28:34.817630  103895 command_runner.go:130] > ID=ubuntu
	I0108 20:28:34.817640  103895 command_runner.go:130] > ID_LIKE=debian
	I0108 20:28:34.817648  103895 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 20:28:34.817653  103895 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 20:28:34.817662  103895 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 20:28:34.817671  103895 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 20:28:34.817679  103895 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 20:28:34.817767  103895 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:28:34.817795  103895 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:28:34.817813  103895 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:28:34.817827  103895 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:28:34.817847  103895 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/addons for local assets ...
	I0108 20:28:34.817928  103895 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/files for local assets ...
	I0108 20:28:34.818048  103895 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> 177612.pem in /etc/ssl/certs
	I0108 20:28:34.818061  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> /etc/ssl/certs/177612.pem
	I0108 20:28:34.818203  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:28:34.827231  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:28:34.853556  103895 start.go:303] post-start completed in 153.119336ms
	I0108 20:28:34.853891  103895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824
	I0108 20:28:34.873252  103895 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/config.json ...
	I0108 20:28:34.873547  103895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:28:34.873594  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:34.893799  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:28:34.980580  103895 command_runner.go:130] > 20%!
	(MISSING)I0108 20:28:34.980663  103895 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:28:34.985242  103895 command_runner.go:130] > 234G
	I0108 20:28:34.985480  103895 start.go:128] duration metric: createHost completed in 11.253782277s
	I0108 20:28:34.985510  103895 start.go:83] releasing machines lock for "multinode-209824", held for 11.253929768s
	I0108 20:28:34.985720  103895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824
	I0108 20:28:35.004653  103895 ssh_runner.go:195] Run: cat /version.json
	I0108 20:28:35.004712  103895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:28:35.004748  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:35.004765  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:28:35.025611  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:28:35.025852  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:28:35.204199  103895 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:28:35.204323  103895 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703498848-17857", "minikube_version": "v1.32.0", "commit": "d18dc8d014b22564d2860ddb02a821a21df70433"}
	I0108 20:28:35.204440  103895 ssh_runner.go:195] Run: systemctl --version
	I0108 20:28:35.209487  103895 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0108 20:28:35.209535  103895 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0108 20:28:35.209651  103895 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:28:35.352632  103895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:28:35.356787  103895 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 20:28:35.356821  103895 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 20:28:35.356833  103895 command_runner.go:130] > Device: 37h/55d	Inode: 570039      Links: 1
	I0108 20:28:35.356843  103895 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:28:35.356854  103895 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:28:35.356865  103895 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:28:35.356875  103895 command_runner.go:130] > Change: 2024-01-08 20:09:52.936393328 +0000
	I0108 20:28:35.356895  103895 command_runner.go:130] >  Birth: 2024-01-08 20:09:52.936393328 +0000
	I0108 20:28:35.357080  103895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:28:35.377988  103895 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:28:35.378103  103895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:28:35.409240  103895 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 20:28:35.409311  103895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
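
	Both find/-exec passes above disable CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so a later start can restore them. A minimal Go sketch of the same rename pass for the bridge/podman configs; the directory is the real path from the log, so running it verbatim needs root on a node (point dir at a scratch directory to try it safely):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"path/filepath"
	    	"strings"
	    )

	    func main() {
	    	dir := "/etc/cni/net.d"
	    	entries, err := os.ReadDir(dir)
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	for _, e := range entries {
	    		name := e.Name()
	    		// Same predicate as the find invocation: regular bridge/podman
	    		// config files not already carrying the disabled suffix.
	    		if e.Type().IsRegular() &&
	    			(strings.Contains(name, "bridge") || strings.Contains(name, "podman")) &&
	    			!strings.HasSuffix(name, ".mk_disabled") {
	    			old := filepath.Join(dir, name)
	    			if err := os.Rename(old, old+".mk_disabled"); err != nil {
	    				fmt.Println(err)
	    			}
	    		}
	    	}
	    }
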
	I0108 20:28:35.409324  103895 start.go:475] detecting cgroup driver to use...
	I0108 20:28:35.409371  103895 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:28:35.409442  103895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:28:35.427455  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:28:35.438686  103895 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:28:35.438741  103895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:28:35.453878  103895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:28:35.468858  103895 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:28:35.548720  103895 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:28:35.638019  103895 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 20:28:35.638065  103895 docker.go:233] disabling docker service ...
	I0108 20:28:35.638125  103895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:28:35.658207  103895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:28:35.672094  103895 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:28:35.752461  103895 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 20:28:35.752550  103895 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:28:35.764124  103895 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 20:28:35.840372  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:28:35.852206  103895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:28:35.868528  103895 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 20:28:35.869657  103895 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:28:35.869726  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:28:35.880617  103895 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:28:35.880684  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:28:35.892590  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:28:35.903276  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
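The sed commands pin the pause image, force cgroup_manager to "cgroupfs", delete any existing conmon_cgroup line, and re-insert conmon_cgroup = "pod" right after the manager setting, all in /etc/crio/crio.conf.d/02-crio.conf. A Go equivalent of that line rewrite (the regexes mirror the sed patterns; the delete-then-reinsert steps are folded into one replacement):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // /conmon_cgroup = .*/d -- drop any existing conmon_cgroup line.
        data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAll(data, nil)
        // Force cgroupfs and append conmon_cgroup = "pod" right after it.
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }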
	I0108 20:28:35.914682  103895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:28:35.925888  103895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:28:35.935509  103895 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 20:28:35.936301  103895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:28:35.944685  103895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:28:36.025610  103895 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:28:36.145573  103895 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:28:36.145652  103895 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:28:36.149690  103895 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:28:36.149713  103895 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:28:36.149719  103895 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I0108 20:28:36.149726  103895 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:28:36.149731  103895 command_runner.go:130] > Access: 2024-01-08 20:28:36.130829527 +0000
	I0108 20:28:36.149736  103895 command_runner.go:130] > Modify: 2024-01-08 20:28:36.130829527 +0000
	I0108 20:28:36.149741  103895 command_runner.go:130] > Change: 2024-01-08 20:28:36.130829527 +0000
	I0108 20:28:36.149745  103895 command_runner.go:130] >  Birth: -
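After restarting CRI-O, minikube gives the socket 60 seconds to appear and confirms with stat that /var/run/crio/crio.sock exists. A sketch of that wait (the 60s budget is from the log; the poll interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses. The 250ms interval is an illustrative choice.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }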
	I0108 20:28:36.149765  103895 start.go:543] Will wait 60s for crictl version
	I0108 20:28:36.149802  103895 ssh_runner.go:195] Run: which crictl
	I0108 20:28:36.153170  103895 command_runner.go:130] > /usr/bin/crictl
	I0108 20:28:36.153317  103895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:28:36.187376  103895 command_runner.go:130] > Version:  0.1.0
	I0108 20:28:36.187402  103895 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:28:36.187410  103895 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 20:28:36.187419  103895 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:28:36.190537  103895 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
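crictl version emits the key/value pairs echoed above, which start.go then reprints verbatim. A small parser for that output shape (sample input copied from the log):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Output shape exactly as it appears in the log above.
        out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.24.6\nRuntimeApiVersion:  v1\n"
        info := map[string]string{}
        for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
            if k, v, ok := strings.Cut(line, ":"); ok {
                info[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        fmt.Printf("runtime %s %s (API %s)\n",
            info["RuntimeName"], info["RuntimeVersion"], info["RuntimeApiVersion"])
    }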
	I0108 20:28:36.190650  103895 ssh_runner.go:195] Run: crio --version
	I0108 20:28:36.227391  103895 command_runner.go:130] > crio version 1.24.6
	I0108 20:28:36.227413  103895 command_runner.go:130] > Version:          1.24.6
	I0108 20:28:36.227437  103895 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:28:36.227441  103895 command_runner.go:130] > GitTreeState:     clean
	I0108 20:28:36.227447  103895 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:28:36.227452  103895 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:28:36.227456  103895 command_runner.go:130] > Compiler:         gc
	I0108 20:28:36.227461  103895 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:28:36.227466  103895 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:28:36.227473  103895 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:28:36.227481  103895 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:28:36.227485  103895 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:28:36.229824  103895 ssh_runner.go:195] Run: crio --version
	I0108 20:28:36.264871  103895 command_runner.go:130] > crio version 1.24.6
	I0108 20:28:36.264894  103895 command_runner.go:130] > Version:          1.24.6
	I0108 20:28:36.264913  103895 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:28:36.264920  103895 command_runner.go:130] > GitTreeState:     clean
	I0108 20:28:36.264928  103895 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:28:36.264933  103895 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:28:36.264937  103895 command_runner.go:130] > Compiler:         gc
	I0108 20:28:36.264941  103895 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:28:36.264948  103895 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:28:36.264956  103895 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:28:36.264963  103895 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:28:36.264967  103895 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:28:36.270342  103895 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 20:28:36.272202  103895 cli_runner.go:164] Run: docker network inspect multinode-209824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
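The cli_runner call renders docker network inspect through a Go template to pull out the name, driver, subnet, gateway, MTU and container IPs of the multinode-209824 network in one pass. A trimmed sketch using the same subcommand with a reduced template (network name from the log; the template here extracts only subnet and gateway):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // A reduced version of the logged --format template.
        format := `{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}`
        out, err := exec.Command("docker", "network", "inspect",
            "multinode-209824", "--format", format).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("network: %s\n", out)
    }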
	I0108 20:28:36.294108  103895 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 20:28:36.297638  103895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
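The bash one-liner rewrites /etc/hosts idempotently: grep -v drops any stale host.minikube.internal line, the gateway entry is appended, and the temp file is copied back over the original. The same filter-and-append in Go (IP and hostname from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.58.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing host.minikube.internal line, as grep -v does.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }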
	I0108 20:28:36.310061  103895 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:28:36.310121  103895 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:28:36.370824  103895 command_runner.go:130] > {
	I0108 20:28:36.370861  103895 command_runner.go:130] >   "images": [
	I0108 20:28:36.370868  103895 command_runner.go:130] >     {
	I0108 20:28:36.370881  103895 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 20:28:36.370897  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.370907  103895 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 20:28:36.370913  103895 command_runner.go:130] >       ],
	I0108 20:28:36.370921  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.370939  103895 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 20:28:36.370955  103895 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 20:28:36.370964  103895 command_runner.go:130] >       ],
	I0108 20:28:36.370975  103895 command_runner.go:130] >       "size": "65258016",
	I0108 20:28:36.370984  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.370995  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.371012  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.371023  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.371033  103895 command_runner.go:130] >     },
	I0108 20:28:36.371043  103895 command_runner.go:130] >     {
	I0108 20:28:36.371056  103895 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 20:28:36.371068  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.371080  103895 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:28:36.371089  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371100  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.371117  103895 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 20:28:36.371133  103895 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 20:28:36.371144  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371158  103895 command_runner.go:130] >       "size": "31470524",
	I0108 20:28:36.371168  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.371178  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.371186  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.371197  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.371207  103895 command_runner.go:130] >     },
	I0108 20:28:36.371217  103895 command_runner.go:130] >     {
	I0108 20:28:36.371230  103895 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 20:28:36.371240  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.371251  103895 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 20:28:36.371260  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371266  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.371280  103895 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 20:28:36.371294  103895 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 20:28:36.371306  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371315  103895 command_runner.go:130] >       "size": "53621675",
	I0108 20:28:36.371324  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.371332  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.371340  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.371346  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.371354  103895 command_runner.go:130] >     },
	I0108 20:28:36.371381  103895 command_runner.go:130] >     {
	I0108 20:28:36.371390  103895 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 20:28:36.371398  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.371406  103895 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 20:28:36.371414  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371423  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.371436  103895 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 20:28:36.371448  103895 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 20:28:36.371466  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371481  103895 command_runner.go:130] >       "size": "295456551",
	I0108 20:28:36.371490  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.371503  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.371510  103895 command_runner.go:130] >       },
	I0108 20:28:36.371520  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.371528  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.371534  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.371542  103895 command_runner.go:130] >     },
	I0108 20:28:36.371551  103895 command_runner.go:130] >     {
	I0108 20:28:36.371563  103895 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 20:28:36.371578  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.371588  103895 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 20:28:36.371596  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371606  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.371619  103895 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 20:28:36.371633  103895 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 20:28:36.371642  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371653  103895 command_runner.go:130] >       "size": "127226832",
	I0108 20:28:36.371661  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.371670  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.371683  103895 command_runner.go:130] >       },
	I0108 20:28:36.371693  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.371701  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.371708  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.371716  103895 command_runner.go:130] >     },
	I0108 20:28:36.371725  103895 command_runner.go:130] >     {
	I0108 20:28:36.371736  103895 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 20:28:36.371745  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.371756  103895 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 20:28:36.371765  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371774  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.371789  103895 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 20:28:36.371804  103895 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 20:28:36.371813  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371824  103895 command_runner.go:130] >       "size": "123261750",
	I0108 20:28:36.371835  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.371844  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.371853  103895 command_runner.go:130] >       },
	I0108 20:28:36.371870  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.371881  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.371890  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.371899  103895 command_runner.go:130] >     },
	I0108 20:28:36.371907  103895 command_runner.go:130] >     {
	I0108 20:28:36.371920  103895 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 20:28:36.371929  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.371937  103895 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 20:28:36.371945  103895 command_runner.go:130] >       ],
	I0108 20:28:36.371957  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.371971  103895 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 20:28:36.371985  103895 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 20:28:36.371994  103895 command_runner.go:130] >       ],
	I0108 20:28:36.372004  103895 command_runner.go:130] >       "size": "74749335",
	I0108 20:28:36.372013  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.372030  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.372040  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.372051  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.372062  103895 command_runner.go:130] >     },
	I0108 20:28:36.372071  103895 command_runner.go:130] >     {
	I0108 20:28:36.372083  103895 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 20:28:36.372092  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.372103  103895 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 20:28:36.372112  103895 command_runner.go:130] >       ],
	I0108 20:28:36.372121  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.372164  103895 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 20:28:36.372182  103895 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 20:28:36.372189  103895 command_runner.go:130] >       ],
	I0108 20:28:36.372196  103895 command_runner.go:130] >       "size": "61551410",
	I0108 20:28:36.372206  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.372215  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.372221  103895 command_runner.go:130] >       },
	I0108 20:28:36.372232  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.372242  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.372253  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.372261  103895 command_runner.go:130] >     },
	I0108 20:28:36.372274  103895 command_runner.go:130] >     {
	I0108 20:28:36.372287  103895 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 20:28:36.372297  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.372308  103895 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 20:28:36.372316  103895 command_runner.go:130] >       ],
	I0108 20:28:36.372322  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.372337  103895 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 20:28:36.372350  103895 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 20:28:36.372358  103895 command_runner.go:130] >       ],
	I0108 20:28:36.372364  103895 command_runner.go:130] >       "size": "750414",
	I0108 20:28:36.372372  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.372383  103895 command_runner.go:130] >         "value": "65535"
	I0108 20:28:36.372393  103895 command_runner.go:130] >       },
	I0108 20:28:36.372403  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.372414  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.372424  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.372433  103895 command_runner.go:130] >     }
	I0108 20:28:36.372442  103895 command_runner.go:130] >   ]
	I0108 20:28:36.372455  103895 command_runner.go:130] > }
	I0108 20:28:36.374694  103895 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:28:36.374727  103895 crio.go:415] Images already preloaded, skipping extraction
	I0108 20:28:36.374801  103895 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:28:36.408157  103895 command_runner.go:130] > {
	I0108 20:28:36.408178  103895 command_runner.go:130] >   "images": [
	I0108 20:28:36.408183  103895 command_runner.go:130] >     {
	I0108 20:28:36.408190  103895 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 20:28:36.408196  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408206  103895 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 20:28:36.408212  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408220  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408242  103895 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 20:28:36.408259  103895 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 20:28:36.408267  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408272  103895 command_runner.go:130] >       "size": "65258016",
	I0108 20:28:36.408278  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.408282  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408290  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408294  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.408300  103895 command_runner.go:130] >     },
	I0108 20:28:36.408306  103895 command_runner.go:130] >     {
	I0108 20:28:36.408312  103895 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 20:28:36.408316  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408328  103895 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:28:36.408332  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408336  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408343  103895 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 20:28:36.408350  103895 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 20:28:36.408356  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408365  103895 command_runner.go:130] >       "size": "31470524",
	I0108 20:28:36.408372  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.408376  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408384  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408388  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.408392  103895 command_runner.go:130] >     },
	I0108 20:28:36.408396  103895 command_runner.go:130] >     {
	I0108 20:28:36.408404  103895 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 20:28:36.408411  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408419  103895 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 20:28:36.408426  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408430  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408440  103895 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 20:28:36.408449  103895 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 20:28:36.408455  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408460  103895 command_runner.go:130] >       "size": "53621675",
	I0108 20:28:36.408466  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.408471  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408477  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408483  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.408490  103895 command_runner.go:130] >     },
	I0108 20:28:36.408493  103895 command_runner.go:130] >     {
	I0108 20:28:36.408499  103895 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 20:28:36.408506  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408511  103895 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 20:28:36.408517  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408521  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408532  103895 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 20:28:36.408541  103895 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 20:28:36.408552  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408559  103895 command_runner.go:130] >       "size": "295456551",
	I0108 20:28:36.408563  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.408570  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.408574  103895 command_runner.go:130] >       },
	I0108 20:28:36.408580  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408584  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408591  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.408594  103895 command_runner.go:130] >     },
	I0108 20:28:36.408600  103895 command_runner.go:130] >     {
	I0108 20:28:36.408606  103895 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 20:28:36.408613  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408618  103895 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 20:28:36.408624  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408628  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408637  103895 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 20:28:36.408649  103895 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 20:28:36.408655  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408659  103895 command_runner.go:130] >       "size": "127226832",
	I0108 20:28:36.408663  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.408668  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.408671  103895 command_runner.go:130] >       },
	I0108 20:28:36.408676  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408682  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408686  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.408692  103895 command_runner.go:130] >     },
	I0108 20:28:36.408696  103895 command_runner.go:130] >     {
	I0108 20:28:36.408704  103895 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 20:28:36.408710  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408716  103895 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 20:28:36.408721  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408728  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408737  103895 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 20:28:36.408747  103895 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 20:28:36.408753  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408760  103895 command_runner.go:130] >       "size": "123261750",
	I0108 20:28:36.408764  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.408770  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.408774  103895 command_runner.go:130] >       },
	I0108 20:28:36.408781  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408785  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408791  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.408794  103895 command_runner.go:130] >     },
	I0108 20:28:36.408800  103895 command_runner.go:130] >     {
	I0108 20:28:36.408806  103895 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 20:28:36.408812  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408817  103895 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 20:28:36.408823  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408828  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408837  103895 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 20:28:36.408844  103895 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 20:28:36.408850  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408860  103895 command_runner.go:130] >       "size": "74749335",
	I0108 20:28:36.408867  103895 command_runner.go:130] >       "uid": null,
	I0108 20:28:36.408872  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408878  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408882  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.408888  103895 command_runner.go:130] >     },
	I0108 20:28:36.408892  103895 command_runner.go:130] >     {
	I0108 20:28:36.408900  103895 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 20:28:36.408907  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.408912  103895 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 20:28:36.408918  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408922  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.408941  103895 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 20:28:36.408953  103895 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 20:28:36.408956  103895 command_runner.go:130] >       ],
	I0108 20:28:36.408961  103895 command_runner.go:130] >       "size": "61551410",
	I0108 20:28:36.408967  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.408972  103895 command_runner.go:130] >         "value": "0"
	I0108 20:28:36.408980  103895 command_runner.go:130] >       },
	I0108 20:28:36.408987  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.408991  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.408997  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.409001  103895 command_runner.go:130] >     },
	I0108 20:28:36.409007  103895 command_runner.go:130] >     {
	I0108 20:28:36.409019  103895 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 20:28:36.409026  103895 command_runner.go:130] >       "repoTags": [
	I0108 20:28:36.409030  103895 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 20:28:36.409036  103895 command_runner.go:130] >       ],
	I0108 20:28:36.409041  103895 command_runner.go:130] >       "repoDigests": [
	I0108 20:28:36.409050  103895 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 20:28:36.409059  103895 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 20:28:36.409065  103895 command_runner.go:130] >       ],
	I0108 20:28:36.409069  103895 command_runner.go:130] >       "size": "750414",
	I0108 20:28:36.409075  103895 command_runner.go:130] >       "uid": {
	I0108 20:28:36.409080  103895 command_runner.go:130] >         "value": "65535"
	I0108 20:28:36.409086  103895 command_runner.go:130] >       },
	I0108 20:28:36.409093  103895 command_runner.go:130] >       "username": "",
	I0108 20:28:36.409099  103895 command_runner.go:130] >       "spec": null,
	I0108 20:28:36.409104  103895 command_runner.go:130] >       "pinned": false
	I0108 20:28:36.409109  103895 command_runner.go:130] >     }
	I0108 20:28:36.409113  103895 command_runner.go:130] >   ]
	I0108 20:28:36.409118  103895 command_runner.go:130] > }
	I0108 20:28:36.409234  103895 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:28:36.409247  103895 cache_images.go:84] Images are preloaded, skipping loading
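Both crictl images --output json runs return the same document, which crio.go decodes to decide that every image needed for Kubernetes v1.28.4 is already present and preload extraction can be skipped. A sketch of that check (struct fields mirror the JSON keys above; the required-image list is a small sample, not minikube's full preload set):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Sample of the v1.28.4 images listed above; not the complete set.
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/etcd:3.5.9-0",
            "registry.k8s.io/pause:3.9",
        } {
            if !have[want] {
                fmt.Println("missing:", want)
            }
        }
    }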
	I0108 20:28:36.409299  103895 ssh_runner.go:195] Run: crio config
	I0108 20:28:36.451497  103895 command_runner.go:130] ! time="2024-01-08 20:28:36.450961413Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 20:28:36.451534  103895 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 20:28:36.456694  103895 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:28:36.456718  103895 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:28:36.456724  103895 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:28:36.456728  103895 command_runner.go:130] > #
	I0108 20:28:36.456734  103895 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:28:36.456740  103895 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:28:36.456746  103895 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:28:36.456758  103895 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:28:36.456764  103895 command_runner.go:130] > # reload'.
	I0108 20:28:36.456781  103895 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:28:36.456792  103895 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:28:36.456798  103895 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:28:36.456804  103895 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:28:36.456811  103895 command_runner.go:130] > [crio]
	I0108 20:28:36.456817  103895 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:28:36.456824  103895 command_runner.go:130] > # containers images, in this directory.
	I0108 20:28:36.456833  103895 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 20:28:36.456842  103895 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:28:36.456849  103895 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 20:28:36.456855  103895 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:28:36.456863  103895 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:28:36.456870  103895 command_runner.go:130] > # storage_driver = "vfs"
	I0108 20:28:36.456876  103895 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:28:36.456884  103895 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:28:36.456888  103895 command_runner.go:130] > # storage_option = [
	I0108 20:28:36.456894  103895 command_runner.go:130] > # ]
	I0108 20:28:36.456904  103895 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:28:36.456912  103895 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:28:36.456919  103895 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:28:36.456924  103895 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:28:36.456932  103895 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:28:36.456939  103895 command_runner.go:130] > # always happen on a node reboot
	I0108 20:28:36.456948  103895 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:28:36.456956  103895 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:28:36.456964  103895 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:28:36.456975  103895 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:28:36.456982  103895 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:28:36.456989  103895 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:28:36.457000  103895 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:28:36.457007  103895 command_runner.go:130] > # internal_wipe = true
	I0108 20:28:36.457012  103895 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:28:36.457020  103895 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:28:36.457028  103895 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:28:36.457035  103895 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:28:36.457043  103895 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:28:36.457052  103895 command_runner.go:130] > [crio.api]
	I0108 20:28:36.457058  103895 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:28:36.457062  103895 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:28:36.457070  103895 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:28:36.457075  103895 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:28:36.457083  103895 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:28:36.457088  103895 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:28:36.457094  103895 command_runner.go:130] > # stream_port = "0"
	I0108 20:28:36.457100  103895 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:28:36.457106  103895 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:28:36.457112  103895 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:28:36.457118  103895 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:28:36.457125  103895 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:28:36.457133  103895 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:28:36.457138  103895 command_runner.go:130] > # minutes.
	I0108 20:28:36.457143  103895 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:28:36.457151  103895 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:28:36.457163  103895 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:28:36.457169  103895 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:28:36.457175  103895 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:28:36.457184  103895 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:28:36.457191  103895 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:28:36.457198  103895 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:28:36.457206  103895 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:28:36.457213  103895 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 20:28:36.457220  103895 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:28:36.457226  103895 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 20:28:36.457247  103895 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:28:36.457256  103895 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:28:36.457260  103895 command_runner.go:130] > [crio.runtime]
	I0108 20:28:36.457265  103895 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:28:36.457271  103895 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:28:36.457277  103895 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:28:36.457284  103895 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:28:36.457290  103895 command_runner.go:130] > # default_ulimits = [
	I0108 20:28:36.457296  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457304  103895 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:28:36.457310  103895 command_runner.go:130] > # no_pivot = false
	I0108 20:28:36.457316  103895 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:28:36.457324  103895 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:28:36.457331  103895 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:28:36.457337  103895 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:28:36.457344  103895 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:28:36.457350  103895 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:28:36.457357  103895 command_runner.go:130] > # conmon = ""
	I0108 20:28:36.457361  103895 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:28:36.457370  103895 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:28:36.457376  103895 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:28:36.457382  103895 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:28:36.457390  103895 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:28:36.457399  103895 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:28:36.457405  103895 command_runner.go:130] > # conmon_env = [
	I0108 20:28:36.457409  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457418  103895 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:28:36.457426  103895 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:28:36.457437  103895 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:28:36.457447  103895 command_runner.go:130] > # default_env = [
	I0108 20:28:36.457453  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457459  103895 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:28:36.457465  103895 command_runner.go:130] > # selinux = false
	I0108 20:28:36.457471  103895 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:28:36.457479  103895 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:28:36.457488  103895 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:28:36.457495  103895 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:28:36.457501  103895 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:28:36.457508  103895 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:28:36.457517  103895 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:28:36.457521  103895 command_runner.go:130] > # which might increase security.
	I0108 20:28:36.457528  103895 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 20:28:36.457534  103895 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:28:36.457542  103895 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:28:36.457553  103895 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:28:36.457562  103895 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:28:36.457569  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:28:36.457573  103895 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:28:36.457581  103895 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:28:36.457586  103895 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:28:36.457592  103895 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:28:36.457598  103895 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:28:36.457605  103895 command_runner.go:130] > # irqbalance daemon.
	I0108 20:28:36.457610  103895 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:28:36.457616  103895 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:28:36.457623  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:28:36.457627  103895 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:28:36.457634  103895 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:28:36.457641  103895 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:28:36.457647  103895 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:28:36.457654  103895 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:28:36.457660  103895 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:28:36.457671  103895 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:28:36.457677  103895 command_runner.go:130] > # will be added.
	I0108 20:28:36.457681  103895 command_runner.go:130] > # default_capabilities = [
	I0108 20:28:36.457687  103895 command_runner.go:130] > # 	"CHOWN",
	I0108 20:28:36.457691  103895 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:28:36.457695  103895 command_runner.go:130] > # 	"FSETID",
	I0108 20:28:36.457700  103895 command_runner.go:130] > # 	"FOWNER",
	I0108 20:28:36.457703  103895 command_runner.go:130] > # 	"SETGID",
	I0108 20:28:36.457709  103895 command_runner.go:130] > # 	"SETUID",
	I0108 20:28:36.457713  103895 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:28:36.457720  103895 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:28:36.457724  103895 command_runner.go:130] > # 	"KILL",
	I0108 20:28:36.457729  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457736  103895 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 20:28:36.457745  103895 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 20:28:36.457752  103895 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 20:28:36.457758  103895 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:28:36.457766  103895 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:28:36.457773  103895 command_runner.go:130] > # default_sysctls = [
	I0108 20:28:36.457777  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457784  103895 command_runner.go:130] > # List of devices on the host that a
	I0108 20:28:36.457790  103895 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:28:36.457796  103895 command_runner.go:130] > # allowed_devices = [
	I0108 20:28:36.457800  103895 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:28:36.457806  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457811  103895 command_runner.go:130] > # List of additional devices, specified as
	I0108 20:28:36.457845  103895 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:28:36.457854  103895 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:28:36.457860  103895 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:28:36.457864  103895 command_runner.go:130] > # additional_devices = [
	I0108 20:28:36.457867  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457875  103895 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:28:36.457881  103895 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:28:36.457885  103895 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:28:36.457891  103895 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:28:36.457895  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457906  103895 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:28:36.457915  103895 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:28:36.457921  103895 command_runner.go:130] > # Defaults to false.
	I0108 20:28:36.457928  103895 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:28:36.457934  103895 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:28:36.457944  103895 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:28:36.457951  103895 command_runner.go:130] > # hooks_dir = [
	I0108 20:28:36.457956  103895 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:28:36.457962  103895 command_runner.go:130] > # ]
	I0108 20:28:36.457968  103895 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:28:36.457976  103895 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:28:36.457983  103895 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:28:36.457989  103895 command_runner.go:130] > #
	I0108 20:28:36.457998  103895 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:28:36.458006  103895 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:28:36.458014  103895 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:28:36.458017  103895 command_runner.go:130] > #
	I0108 20:28:36.458025  103895 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:28:36.458033  103895 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:28:36.458042  103895 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:28:36.458049  103895 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:28:36.458053  103895 command_runner.go:130] > #
	I0108 20:28:36.458059  103895 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:28:36.458064  103895 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:28:36.458073  103895 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:28:36.458079  103895 command_runner.go:130] > # pids_limit = 0
	I0108 20:28:36.458085  103895 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:28:36.458093  103895 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:28:36.458101  103895 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:28:36.458111  103895 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:28:36.458115  103895 command_runner.go:130] > # log_size_max = -1
	I0108 20:28:36.458128  103895 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 20:28:36.458135  103895 command_runner.go:130] > # log_to_journald = false
	I0108 20:28:36.458141  103895 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:28:36.458149  103895 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:28:36.458154  103895 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:28:36.458163  103895 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:28:36.458171  103895 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:28:36.458175  103895 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:28:36.458182  103895 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:28:36.458186  103895 command_runner.go:130] > # read_only = false
	I0108 20:28:36.458195  103895 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:28:36.458201  103895 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:28:36.458208  103895 command_runner.go:130] > # live configuration reload.
	I0108 20:28:36.458212  103895 command_runner.go:130] > # log_level = "info"
	I0108 20:28:36.458220  103895 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:28:36.458227  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:28:36.458231  103895 command_runner.go:130] > # log_filter = ""
	I0108 20:28:36.458239  103895 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:28:36.458247  103895 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:28:36.458257  103895 command_runner.go:130] > # separated by comma.
	I0108 20:28:36.458265  103895 command_runner.go:130] > # uid_mappings = ""
	I0108 20:28:36.458273  103895 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:28:36.458282  103895 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:28:36.458288  103895 command_runner.go:130] > # separated by comma.
	I0108 20:28:36.458294  103895 command_runner.go:130] > # gid_mappings = ""
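A sketch of what a concrete remapping could look like, assuming the host reserves the subordinate range starting at 100000 and that CRI-O reads fragments from /etc/crio/crio.conf.d/ (the range and file name are assumptions, not values from this run):

	# Hypothetical drop-in; 0:100000:65536 is an assumed subordinate ID range.
	sudo tee /etc/crio/crio.conf.d/10-userns.conf <<'EOF'
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF
	sudo systemctl restart crio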
	I0108 20:28:36.458300  103895 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:28:36.458309  103895 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:28:36.458320  103895 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:28:36.458327  103895 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:28:36.458333  103895 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:28:36.458341  103895 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:28:36.458350  103895 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:28:36.458356  103895 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:28:36.458363  103895 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:28:36.458370  103895 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:28:36.458378  103895 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I0108 20:28:36.458385  103895 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:28:36.458391  103895 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:28:36.458401  103895 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:28:36.458408  103895 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:28:36.458413  103895 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:28:36.458422  103895 command_runner.go:130] > # drop_infra_ctr = true
	I0108 20:28:36.458430  103895 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:28:36.458436  103895 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:28:36.458449  103895 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:28:36.458456  103895 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:28:36.458462  103895 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:28:36.458469  103895 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:28:36.458474  103895 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:28:36.458482  103895 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:28:36.458488  103895 command_runner.go:130] > # pinns_path = ""
	I0108 20:28:36.458495  103895 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:28:36.458503  103895 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:28:36.458511  103895 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:28:36.458518  103895 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:28:36.458523  103895 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:28:36.458532  103895 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0108 20:28:36.458543  103895 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:28:36.458549  103895 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:28:36.458563  103895 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:28:36.458570  103895 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:28:36.458575  103895 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:28:36.458580  103895 command_runner.go:130] > # ]
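Using the /etc/hostname example above, a minimal sketch of setting this option via a drop-in (the file name is hypothetical):

	sudo tee /etc/crio/crio.conf.d/05-reject-hostname.conf <<'EOF'
	[crio.runtime]
	absent_mount_sources_to_reject = [
	    "/etc/hostname",
	]
	EOF
	sudo systemctl restart crio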
	I0108 20:28:36.458587  103895 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:28:36.458595  103895 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:28:36.458601  103895 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:28:36.458609  103895 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:28:36.458614  103895 command_runner.go:130] > #
	I0108 20:28:36.458619  103895 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:28:36.458626  103895 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:28:36.458630  103895 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:28:36.458637  103895 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:28:36.458642  103895 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:28:36.458648  103895 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:28:36.458652  103895 command_runner.go:130] > # Where:
	I0108 20:28:36.458660  103895 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:28:36.458667  103895 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:28:36.458679  103895 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:28:36.458687  103895 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:28:36.458693  103895 command_runner.go:130] > #   in $PATH.
	I0108 20:28:36.458699  103895 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:28:36.458706  103895 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:28:36.458712  103895 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:28:36.458718  103895 command_runner.go:130] > #   state.
	I0108 20:28:36.458725  103895 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:28:36.458732  103895 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 20:28:36.458741  103895 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:28:36.458748  103895 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:28:36.458755  103895 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:28:36.458763  103895 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:28:36.458770  103895 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:28:36.458776  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:28:36.458786  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:28:36.458793  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:28:36.458801  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:28:36.458811  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:28:36.458820  103895 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:28:36.458828  103895 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:28:36.458837  103895 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:28:36.458845  103895 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:28:36.458854  103895 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:28:36.458862  103895 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 20:28:36.458866  103895 command_runner.go:130] > runtime_type = "oci"
	I0108 20:28:36.458872  103895 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:28:36.458877  103895 command_runner.go:130] > runtime_config_path = ""
	I0108 20:28:36.458883  103895 command_runner.go:130] > monitor_path = ""
	I0108 20:28:36.458887  103895 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:28:36.458894  103895 command_runner.go:130] > monitor_exec_cgroup = ""
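The runc handler above is the only one registered in this run; additional handlers follow the same table format. A sketch of registering crun via a drop-in, assuming crun is installed at /usr/bin/crun (the path and file name are assumptions):

	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio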
	I0108 20:28:36.458957  103895 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:28:36.458965  103895 command_runner.go:130] > # running containers
	I0108 20:28:36.458969  103895 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:28:36.458975  103895 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:28:36.458981  103895 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:28:36.458992  103895 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0108 20:28:36.458999  103895 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:28:36.459004  103895 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:28:36.459009  103895 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:28:36.459014  103895 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:28:36.459021  103895 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:28:36.459029  103895 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:28:36.459035  103895 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:28:36.459043  103895 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:28:36.459049  103895 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:28:36.459056  103895 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 20:28:36.459066  103895 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:28:36.459072  103895 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:28:36.459083  103895 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:28:36.459092  103895 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:28:36.459101  103895 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:28:36.459111  103895 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:28:36.459117  103895 command_runner.go:130] > # Example:
	I0108 20:28:36.459125  103895 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:28:36.459132  103895 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:28:36.459137  103895 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:28:36.459144  103895 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:28:36.459148  103895 command_runner.go:130] > # cpuset = "0-1"
	I0108 20:28:36.459153  103895 command_runner.go:130] > # cpushares = 0
	I0108 20:28:36.459156  103895 command_runner.go:130] > # Where:
	I0108 20:28:36.459163  103895 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:28:36.459170  103895 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:28:36.459178  103895 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:28:36.459187  103895 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:28:36.459196  103895 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:28:36.459209  103895 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:28:36.459214  103895 command_runner.go:130] > # 
	I0108 20:28:36.459220  103895 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:28:36.459226  103895 command_runner.go:130] > #
	I0108 20:28:36.459232  103895 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:28:36.459238  103895 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:28:36.459249  103895 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:28:36.459258  103895 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:28:36.459267  103895 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:28:36.459271  103895 command_runner.go:130] > [crio.image]
	I0108 20:28:36.459279  103895 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:28:36.459287  103895 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:28:36.459296  103895 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:28:36.459305  103895 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:28:36.459309  103895 command_runner.go:130] > # global_auth_file = ""
	I0108 20:28:36.459316  103895 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:28:36.459321  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:28:36.459328  103895 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:28:36.459334  103895 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:28:36.459342  103895 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:28:36.459350  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:28:36.459354  103895 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:28:36.459389  103895 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:28:36.459402  103895 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 20:28:36.459417  103895 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 20:28:36.459425  103895 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:28:36.459430  103895 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:28:36.459436  103895 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:28:36.459449  103895 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:28:36.459457  103895 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:28:36.459466  103895 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:28:36.459474  103895 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:28:36.459482  103895 command_runner.go:130] > # signature_policy = ""
	I0108 20:28:36.459494  103895 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:28:36.459502  103895 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:28:36.459509  103895 command_runner.go:130] > # changing them here.
	I0108 20:28:36.459514  103895 command_runner.go:130] > # insecure_registries = [
	I0108 20:28:36.459519  103895 command_runner.go:130] > # ]
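If a registry without TLS really is needed, the preferred place is /etc/containers/registries.conf as noted above; a CRI-O-only sketch, with a hypothetical registry host, would be:

	# Hypothetical host; prefer registries.conf for anything long-lived.
	sudo tee /etc/crio/crio.conf.d/30-insecure.conf <<'EOF'
	[crio.image]
	insecure_registries = [
	    "registry.local:5000",
	]
	EOF
	sudo systemctl restart crio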
	I0108 20:28:36.459526  103895 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:28:36.459534  103895 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0108 20:28:36.459541  103895 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:28:36.459547  103895 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:28:36.459581  103895 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:28:36.459601  103895 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:28:36.459608  103895 command_runner.go:130] > # CNI plugins.
	I0108 20:28:36.459612  103895 command_runner.go:130] > [crio.network]
	I0108 20:28:36.459621  103895 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:28:36.459628  103895 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 20:28:36.459635  103895 command_runner.go:130] > # cni_default_network = ""
	I0108 20:28:36.459641  103895 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:28:36.459649  103895 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:28:36.459654  103895 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:28:36.459661  103895 command_runner.go:130] > # plugin_dirs = [
	I0108 20:28:36.459666  103895 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:28:36.459672  103895 command_runner.go:130] > # ]
	I0108 20:28:36.459678  103895 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0108 20:28:36.459684  103895 command_runner.go:130] > [crio.metrics]
	I0108 20:28:36.459690  103895 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:28:36.459697  103895 command_runner.go:130] > # enable_metrics = false
	I0108 20:28:36.459702  103895 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:28:36.459712  103895 command_runner.go:130] > # By default, all metrics are enabled.
	I0108 20:28:36.459720  103895 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:28:36.459728  103895 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:28:36.459735  103895 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:28:36.459742  103895 command_runner.go:130] > # metrics_collectors = [
	I0108 20:28:36.459746  103895 command_runner.go:130] > # 	"operations",
	I0108 20:28:36.459753  103895 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:28:36.459758  103895 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:28:36.459764  103895 command_runner.go:130] > # 	"operations_errors",
	I0108 20:28:36.459769  103895 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:28:36.459775  103895 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:28:36.459780  103895 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:28:36.459787  103895 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:28:36.459792  103895 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:28:36.459798  103895 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:28:36.459803  103895 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:28:36.459809  103895 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:28:36.459814  103895 command_runner.go:130] > # 	"containers_oom",
	I0108 20:28:36.459825  103895 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:28:36.459832  103895 command_runner.go:130] > # 	"operations_total",
	I0108 20:28:36.459836  103895 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:28:36.459844  103895 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:28:36.459849  103895 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:28:36.459856  103895 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:28:36.459860  103895 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:28:36.459872  103895 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:28:36.459880  103895 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:28:36.459884  103895 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:28:36.459891  103895 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:28:36.459895  103895 command_runner.go:130] > # ]
	I0108 20:28:36.459900  103895 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:28:36.459907  103895 command_runner.go:130] > # metrics_port = 9090
	I0108 20:28:36.459912  103895 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:28:36.459918  103895 command_runner.go:130] > # metrics_socket = ""
	I0108 20:28:36.459926  103895 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:28:36.459934  103895 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:28:36.459945  103895 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:28:36.459952  103895 command_runner.go:130] > # certificate on any modification event.
	I0108 20:28:36.459956  103895 command_runner.go:130] > # metrics_cert = ""
	I0108 20:28:36.459964  103895 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:28:36.459969  103895 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:28:36.459975  103895 command_runner.go:130] > # metrics_key = ""
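Metrics are off in this run; a sketch of enabling and scraping them locally, using the default port shown above (the drop-in file name is hypothetical):

	sudo tee /etc/crio/crio.conf.d/40-metrics.conf <<'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | head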
	I0108 20:28:36.459981  103895 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:28:36.459987  103895 command_runner.go:130] > [crio.tracing]
	I0108 20:28:36.459993  103895 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:28:36.459999  103895 command_runner.go:130] > # enable_tracing = false
	I0108 20:28:36.460005  103895 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 20:28:36.460013  103895 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:28:36.460018  103895 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:28:36.460025  103895 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:28:36.460031  103895 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:28:36.460037  103895 command_runner.go:130] > [crio.stats]
	I0108 20:28:36.460044  103895 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:28:36.460051  103895 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:28:36.460063  103895 command_runner.go:130] > # stats_collection_period = 0
	I0108 20:28:36.460179  103895 cni.go:84] Creating CNI manager for ""
	I0108 20:28:36.460192  103895 cni.go:136] 1 nodes found, recommending kindnet
	I0108 20:28:36.460218  103895 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:28:36.460253  103895 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-209824 NodeName:multinode-209824 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:28:36.460427  103895 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-209824"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:28:36.460509  103895 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-209824 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-209824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
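The ExecStart override above lands in a standard systemd drop-in (written a few lines below as 10-kubeadm.conf); it can be verified with ordinary systemctl commands, nothing minikube-specific:

	sudo systemctl daemon-reload
	systemctl cat kubelet.service        # unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart  # the overridden ExecStart line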
	I0108 20:28:36.460578  103895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:28:36.469527  103895 command_runner.go:130] > kubeadm
	I0108 20:28:36.469545  103895 command_runner.go:130] > kubectl
	I0108 20:28:36.469549  103895 command_runner.go:130] > kubelet
	I0108 20:28:36.470168  103895 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:28:36.470218  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:28:36.477900  103895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0108 20:28:36.496510  103895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:28:36.518101  103895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
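The rendered config is staged as kubeadm.yaml.new and copied into place further down; a config like this can be sanity-checked without mutating the node via kubeadm's standard dry-run mode (a sketch, not something this run performs):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run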
	I0108 20:28:36.539830  103895 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:28:36.543960  103895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:28:36.556145  103895 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824 for IP: 192.168.58.2
	I0108 20:28:36.556185  103895 certs.go:190] acquiring lock for shared ca certs: {Name:mk77871b3b3f5891ac4ba9a63281bc46e0e62e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:36.556324  103895 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key
	I0108 20:28:36.556370  103895 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key
	I0108 20:28:36.556444  103895 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key
	I0108 20:28:36.556465  103895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt with IP's: []
	I0108 20:28:36.633468  103895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt ...
	I0108 20:28:36.633526  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt: {Name:mkaf55fdebd27d4de29db6f636d451ca1c39ceee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:36.633807  103895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key ...
	I0108 20:28:36.633824  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key: {Name:mk7ec47341a983ebcc106572dfaf03a3e4ac02cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:36.633945  103895 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.key.cee25041
	I0108 20:28:36.633967  103895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:28:36.779408  103895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.crt.cee25041 ...
	I0108 20:28:36.779462  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.crt.cee25041: {Name:mkabf8a84b729e2f419b0ef0922566d065fe5661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:36.779733  103895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.key.cee25041 ...
	I0108 20:28:36.779756  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.key.cee25041: {Name:mkd23b649394c263a4887e325565162720bcd8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:36.779886  103895 certs.go:337] copying /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.crt
	I0108 20:28:36.780014  103895 certs.go:341] copying /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.key
	I0108 20:28:36.780110  103895 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.key
	I0108 20:28:36.780134  103895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.crt with IP's: []
	I0108 20:28:37.011830  103895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.crt ...
	I0108 20:28:37.011875  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.crt: {Name:mkcbba8f337de86fe6acd466c88691a6a98668a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:37.012117  103895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.key ...
	I0108 20:28:37.012141  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.key: {Name:mk7b3c7c6b32211c00b091341014e19fc8672a98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:37.012247  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:28:37.012274  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:28:37.012293  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:28:37.012312  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:28:37.012329  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:28:37.012351  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:28:37.012371  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:28:37.012394  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:28:37.012469  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem (1338 bytes)
	W0108 20:28:37.012526  103895 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761_empty.pem, impossibly tiny 0 bytes
	I0108 20:28:37.012554  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:28:37.012595  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:28:37.012634  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:28:37.012670  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem (1679 bytes)
	I0108 20:28:37.012728  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:28:37.012771  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> /usr/share/ca-certificates/177612.pem
	I0108 20:28:37.012794  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:28:37.012811  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem -> /usr/share/ca-certificates/17761.pem
	I0108 20:28:37.013422  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:28:37.041103  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:28:37.066694  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:28:37.091943  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:28:37.117887  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:28:37.144383  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:28:37.170752  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:28:37.195391  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:28:37.219152  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /usr/share/ca-certificates/177612.pem (1708 bytes)
	I0108 20:28:37.244457  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:28:37.271086  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem --> /usr/share/ca-certificates/17761.pem (1338 bytes)
	I0108 20:28:37.296202  103895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:28:37.315312  103895 ssh_runner.go:195] Run: openssl version
	I0108 20:28:37.320956  103895 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 20:28:37.321036  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177612.pem && ln -fs /usr/share/ca-certificates/177612.pem /etc/ssl/certs/177612.pem"
	I0108 20:28:37.330588  103895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177612.pem
	I0108 20:28:37.334314  103895 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/177612.pem
	I0108 20:28:37.334356  103895 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/177612.pem
	I0108 20:28:37.334405  103895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177612.pem
	I0108 20:28:37.341886  103895 command_runner.go:130] > 3ec20f2e
	I0108 20:28:37.342006  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177612.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:28:37.352418  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:28:37.362702  103895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:28:37.366731  103895 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:28:37.366820  103895 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:28:37.366894  103895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:28:37.374939  103895 command_runner.go:130] > b5213941
	I0108 20:28:37.375078  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:28:37.385634  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17761.pem && ln -fs /usr/share/ca-certificates/17761.pem /etc/ssl/certs/17761.pem"
	I0108 20:28:37.395777  103895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17761.pem
	I0108 20:28:37.399499  103895 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/17761.pem
	I0108 20:28:37.399549  103895 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/17761.pem
	I0108 20:28:37.399602  103895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17761.pem
	I0108 20:28:37.406803  103895 command_runner.go:130] > 51391683
	I0108 20:28:37.406871  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17761.pem /etc/ssl/certs/51391683.0"
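Each of the three cert installs above follows the OpenSSL c_rehash convention: tools locate a CA in /etc/ssl/certs through a <subject-hash>.0 symlink, where the hash is whatever `openssl x509 -hash` prints for that certificate. A minimal sketch of the same steps for one hypothetical certificate:

	CERT=/usr/share/ca-certificates/example.pem    # hypothetical path
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	openssl verify -CApath /etc/ssl/certs "$CERT"  # resolves via the new symlink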
	I0108 20:28:37.416176  103895 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:28:37.419872  103895 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:28:37.419945  103895 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:28:37.419997  103895 kubeadm.go:404] StartCluster: {Name:multinode-209824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-209824 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:28:37.420073  103895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:28:37.420138  103895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:28:37.457307  103895 cri.go:89] found id: ""
	I0108 20:28:37.457417  103895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:28:37.467025  103895 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 20:28:37.467058  103895 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 20:28:37.467066  103895 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 20:28:37.467134  103895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:28:37.476449  103895 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 20:28:37.476539  103895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:28:37.485259  103895 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 20:28:37.485290  103895 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 20:28:37.485299  103895 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 20:28:37.485311  103895 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:28:37.485349  103895 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:28:37.485386  103895 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 20:28:37.536865  103895 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:28:37.536903  103895 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 20:28:37.536943  103895 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:28:37.536975  103895 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:28:37.576257  103895 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:28:37.576305  103895 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:28:37.576393  103895 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 20:28:37.576405  103895 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 20:28:37.576455  103895 kubeadm.go:322] OS: Linux
	I0108 20:28:37.576466  103895 command_runner.go:130] > OS: Linux
	I0108 20:28:37.576515  103895 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 20:28:37.576523  103895 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 20:28:37.576569  103895 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 20:28:37.576591  103895 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 20:28:37.576643  103895 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 20:28:37.576650  103895 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 20:28:37.576716  103895 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 20:28:37.576735  103895 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 20:28:37.576784  103895 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 20:28:37.576791  103895 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 20:28:37.576830  103895 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 20:28:37.576843  103895 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 20:28:37.576895  103895 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 20:28:37.576902  103895 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 20:28:37.576962  103895 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 20:28:37.576974  103895 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 20:28:37.577035  103895 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 20:28:37.577044  103895 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 20:28:37.650885  103895 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:28:37.650921  103895 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:28:37.650996  103895 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:28:37.651008  103895 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:28:37.651111  103895 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0108 20:28:37.651125  103895 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0108 20:28:37.879626  103895 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:28:37.883583  103895 out.go:204]   - Generating certificates and keys ...
	I0108 20:28:37.879762  103895 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:28:37.883711  103895 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:28:37.883735  103895 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 20:28:37.883790  103895 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:28:37.883797  103895 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 20:28:38.141828  103895 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:28:38.141863  103895 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:28:38.266609  103895 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:28:38.266638  103895 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:28:38.330575  103895 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:28:38.330609  103895 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 20:28:38.452813  103895 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:28:38.452855  103895 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 20:28:38.743243  103895 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:28:38.743286  103895 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 20:28:38.743479  103895 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-209824] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:28:38.743498  103895 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-209824] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:28:38.930955  103895 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:28:38.930999  103895 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 20:28:38.931169  103895 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-209824] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:28:38.931181  103895 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-209824] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 20:28:39.172795  103895 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:28:39.172834  103895 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:28:39.343548  103895 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:28:39.343601  103895 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:28:39.554855  103895 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:28:39.554895  103895 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 20:28:39.554951  103895 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:28:39.554959  103895 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:28:39.741568  103895 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:28:39.741614  103895 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:28:39.973007  103895 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:28:39.973040  103895 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:28:40.160122  103895 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:28:40.160151  103895 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:28:40.380832  103895 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:28:40.380879  103895 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:28:40.381276  103895 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:28:40.381300  103895 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:28:40.384850  103895 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:28:40.387811  103895 out.go:204]   - Booting up control plane ...
	I0108 20:28:40.384940  103895 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:28:40.387943  103895 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:28:40.387975  103895 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:28:40.388051  103895 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:28:40.388063  103895 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:28:40.388132  103895 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:28:40.388140  103895 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:28:40.398122  103895 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:28:40.398189  103895 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:28:40.398920  103895 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:28:40.398944  103895 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:28:40.398999  103895 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:28:40.399012  103895 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:28:40.486451  103895 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:28:40.486478  103895 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:28:45.989231  103895 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502857 seconds
	I0108 20:28:45.989259  103895 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.502857 seconds
	I0108 20:28:45.989398  103895 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:28:45.989432  103895 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:28:46.006681  103895 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:28:46.006710  103895 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:28:46.531162  103895 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:28:46.531189  103895 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:28:46.531433  103895 kubeadm.go:322] [mark-control-plane] Marking the node multinode-209824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:28:46.531448  103895 command_runner.go:130] > [mark-control-plane] Marking the node multinode-209824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:28:47.042599  103895 kubeadm.go:322] [bootstrap-token] Using token: stmars.05yizflf9zmwgn33
	I0108 20:28:47.044681  103895 out.go:204]   - Configuring RBAC rules ...
	I0108 20:28:47.042653  103895 command_runner.go:130] > [bootstrap-token] Using token: stmars.05yizflf9zmwgn33
	I0108 20:28:47.044850  103895 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:28:47.044876  103895 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:28:47.051196  103895 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:28:47.051235  103895 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:28:47.062069  103895 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:28:47.062115  103895 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:28:47.066331  103895 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0108 20:28:47.066365  103895 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0108 20:28:47.071397  103895 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:28:47.071411  103895 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:28:47.075031  103895 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:28:47.075058  103895 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:28:47.090786  103895 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:28:47.090821  103895 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:28:47.336248  103895 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:28:47.336288  103895 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 20:28:47.495061  103895 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:28:47.495112  103895 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 20:28:47.495120  103895 kubeadm.go:322] 
	I0108 20:28:47.495189  103895 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:28:47.495196  103895 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 20:28:47.495202  103895 kubeadm.go:322] 
	I0108 20:28:47.495291  103895 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:28:47.495304  103895 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 20:28:47.495310  103895 kubeadm.go:322] 
	I0108 20:28:47.495338  103895 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:28:47.495344  103895 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 20:28:47.495447  103895 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:28:47.495464  103895 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:28:47.495532  103895 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:28:47.495546  103895 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:28:47.495552  103895 kubeadm.go:322] 
	I0108 20:28:47.495646  103895 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:28:47.495672  103895 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 20:28:47.495684  103895 kubeadm.go:322] 
	I0108 20:28:47.495733  103895 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:28:47.495740  103895 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:28:47.495745  103895 kubeadm.go:322] 
	I0108 20:28:47.495839  103895 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:28:47.495848  103895 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 20:28:47.495956  103895 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:28:47.495981  103895 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:28:47.496072  103895 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:28:47.496086  103895 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:28:47.496092  103895 kubeadm.go:322] 
	I0108 20:28:47.496193  103895 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:28:47.496208  103895 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:28:47.496309  103895 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 20:28:47.496317  103895 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:28:47.496324  103895 kubeadm.go:322] 
	I0108 20:28:47.496443  103895 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token stmars.05yizflf9zmwgn33 \
	I0108 20:28:47.496454  103895 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token stmars.05yizflf9zmwgn33 \
	I0108 20:28:47.496590  103895 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 \
	I0108 20:28:47.496603  103895 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 \
	I0108 20:28:47.496640  103895 command_runner.go:130] > 	--control-plane 
	I0108 20:28:47.496649  103895 kubeadm.go:322] 	--control-plane 
	I0108 20:28:47.496655  103895 kubeadm.go:322] 
	I0108 20:28:47.496772  103895 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:28:47.496787  103895 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:28:47.496794  103895 kubeadm.go:322] 
	I0108 20:28:47.496898  103895 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token stmars.05yizflf9zmwgn33 \
	I0108 20:28:47.496912  103895 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token stmars.05yizflf9zmwgn33 \
	I0108 20:28:47.497044  103895 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 
	I0108 20:28:47.497065  103895 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 
	I0108 20:28:47.500610  103895 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 20:28:47.500661  103895 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 20:28:47.500778  103895 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:28:47.500807  103895 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
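The join commands printed above carry a --discovery-token-ca-cert-hash. As a hedged aside (the stock kubeadm recipe, not something this log runs), that hash can be recomputed from the cluster CA's public key; this cluster keeps its certificates under /var/lib/minikube/certs per the [certs] lines above:

    # Recompute the sha256 discovery hash from the cluster CA public key
    # (standard kubeadm recipe; the certs dir is taken from the log above)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the sha256:5f0d38... value embedded in both join commands.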
	I0108 20:28:47.500838  103895 cni.go:84] Creating CNI manager for ""
	I0108 20:28:47.500852  103895 cni.go:136] 1 nodes found, recommending kindnet
	I0108 20:28:47.503415  103895 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:28:47.505189  103895 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:28:47.511639  103895 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:28:47.511688  103895 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I0108 20:28:47.511699  103895 command_runner.go:130] > Device: 37h/55d	Inode: 573813      Links: 1
	I0108 20:28:47.511707  103895 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:28:47.511763  103895 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0108 20:28:47.511778  103895 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0108 20:28:47.511792  103895 command_runner.go:130] > Change: 2024-01-08 20:09:53.352435730 +0000
	I0108 20:28:47.511802  103895 command_runner.go:130] >  Birth: 2024-01-08 20:09:53.328433283 +0000
	I0108 20:28:47.511881  103895 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:28:47.511903  103895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:28:47.598806  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:28:48.400735  103895 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 20:28:48.410509  103895 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 20:28:48.421267  103895 command_runner.go:130] > serviceaccount/kindnet created
	I0108 20:28:48.433831  103895 command_runner.go:130] > daemonset.apps/kindnet created
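The four "created" lines above are the kindnet CNI manifest landing: minikube copied a 2438-byte manifest to /var/tmp/minikube/cni.yaml on the node and applied it with the node's bundled kubectl. A rough sketch of the same flow from a host shell, assuming a local cni.yaml (the manifest contents are not in this log):

    # Copy a CNI manifest into the node, then apply it with the node's kubectl
    minikube -p multinode-209824 cp cni.yaml /var/tmp/minikube/cni.yaml
    minikube -p multinode-209824 ssh -- sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
      apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml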
	I0108 20:28:48.438759  103895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:28:48.438844  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-209824 minikube.k8s.io/updated_at=2024_01_08T20_28_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:48.438857  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:48.513518  103895 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 20:28:48.518797  103895 command_runner.go:130] > -16
	I0108 20:28:48.518827  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:48.518847  103895 ops.go:34] apiserver oom_adj: -16
	I0108 20:28:48.589824  103895 command_runner.go:130] > node/multinode-209824 labeled
	I0108 20:28:48.691400  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:49.019837  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:49.088309  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:49.519294  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:49.590833  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:50.019581  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:50.089034  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:50.518954  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:50.589888  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:51.019315  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:51.091436  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:51.519040  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:51.588954  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:52.019077  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:52.085751  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:52.519557  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:52.590097  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:53.019806  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:53.087748  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:53.519023  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:53.588618  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:54.019112  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:54.095452  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:54.519029  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:54.591191  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:55.019865  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:55.087683  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:55.519112  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:55.598720  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:56.019243  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:56.088453  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:56.519791  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:56.594523  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:57.019070  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:57.087959  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:57.519653  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:57.590382  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:58.019732  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:58.091491  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:58.519150  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:58.587164  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:59.018927  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:59.095563  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:28:59.519083  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:59.594309  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:29:00.018945  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:29:00.229433  103895 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:29:00.518943  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:29:00.601673  103895 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 20:29:00.601717  103895 command_runner.go:130] > default   0         0s
	I0108 20:29:00.605851  103895 kubeadm.go:1088] duration metric: took 12.167096826s to wait for elevateKubeSystemPrivileges.
	I0108 20:29:00.605893  103895 kubeadm.go:406] StartCluster complete in 23.185899536s
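The NotFound burst above is not a failure: elevateKubeSystemPrivileges polls `kubectl get sa default` roughly every 500ms until the token controller creates the default ServiceAccount (about 12s here), alongside the minikube-rbac clusterrolebinding created earlier. A minimal sketch of the same wait-and-bind, assuming kubectl already points at this cluster:

    # Wait for the "default" ServiceAccount to exist (what the ~12s loop above does),
    # then grant kube-system's default SA cluster-admin, as the minikube-rbac step did
    until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default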
	I0108 20:29:00.605915  103895 settings.go:142] acquiring lock: {Name:mk2f02a606763d8db203f5ac009c4f8430c5c61d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:29:00.605992  103895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:29:00.606901  103895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-11003/kubeconfig: {Name:mkc68e8b275b7f7ddea94f238057103f0099d605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:29:00.607190  103895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:29:00.607328  103895 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:29:00.607437  103895 addons.go:69] Setting storage-provisioner=true in profile "multinode-209824"
	I0108 20:29:00.607459  103895 addons.go:69] Setting default-storageclass=true in profile "multinode-209824"
	I0108 20:29:00.607466  103895 addons.go:237] Setting addon storage-provisioner=true in "multinode-209824"
	I0108 20:29:00.607476  103895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-209824"
	I0108 20:29:00.607477  103895 config.go:182] Loaded profile config "multinode-209824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:29:00.607530  103895 host.go:66] Checking if "multinode-209824" exists ...
	I0108 20:29:00.607625  103895 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:29:00.607847  103895 kapi.go:59] client config for multinode-209824: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:29:00.607937  103895 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:29:00.608107  103895 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:29:00.608638  103895 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:29:00.608992  103895 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:29:00.609011  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:00.609031  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:00.609041  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:00.621041  103895 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 20:29:00.621082  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:00.621095  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:00.621106  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:00.621116  103895 round_trippers.go:580]     Content-Length: 291
	I0108 20:29:00.621125  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:00 GMT
	I0108 20:29:00.621133  103895 round_trippers.go:580]     Audit-Id: f054e461-a6d0-42b3-b096-f790437635fb
	I0108 20:29:00.621142  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:00.621154  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:00.621225  103895 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f9e25afe-d819-409b-ac6a-2d6befc195f3","resourceVersion":"367","creationTimestamp":"2024-01-08T20:28:47Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 20:29:00.621899  103895 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f9e25afe-d819-409b-ac6a-2d6befc195f3","resourceVersion":"367","creationTimestamp":"2024-01-08T20:28:47Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 20:29:00.622015  103895 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:29:00.622032  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:00.622042  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:00.622053  103895 round_trippers.go:473]     Content-Type: application/json
	I0108 20:29:00.622063  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:00.632136  103895 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 20:29:00.632172  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:00.632184  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:00.632193  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:00.632202  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:00.632211  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:00.632219  103895 round_trippers.go:580]     Content-Length: 291
	I0108 20:29:00.632227  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:00 GMT
	I0108 20:29:00.632235  103895 round_trippers.go:580]     Audit-Id: df0f9b8e-3f5d-457d-94c8-ef0dcdb4d277
	I0108 20:29:00.632274  103895 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f9e25afe-d819-409b-ac6a-2d6befc195f3","resourceVersion":"386","creationTimestamp":"2024-01-08T20:28:47Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
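The GET/PUT pair above writes the Deployment's scale subresource directly, dropping CoreDNS from 2 replicas to 1 for a single-node cluster. The kubectl equivalent (a sketch, not the API call minikube actually makes) is:

    # Rescale the coredns Deployment through the same scale subresource
    kubectl -n kube-system scale deployment coredns --replicas=1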
	I0108 20:29:00.633475  103895 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:29:00.635940  103895 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:29:00.633819  103895 kapi.go:59] client config for multinode-209824: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:29:00.638003  103895 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:29:00.638033  103895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:29:00.638097  103895 addons.go:237] Setting addon default-storageclass=true in "multinode-209824"
	I0108 20:29:00.638141  103895 host.go:66] Checking if "multinode-209824" exists ...
	I0108 20:29:00.638101  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:29:00.638615  103895 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:29:00.662913  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:29:00.665624  103895 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:29:00.665655  103895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:29:00.665758  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:29:00.684981  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:29:00.799318  103895 command_runner.go:130] > apiVersion: v1
	I0108 20:29:00.799344  103895 command_runner.go:130] > data:
	I0108 20:29:00.799351  103895 command_runner.go:130] >   Corefile: |
	I0108 20:29:00.799375  103895 command_runner.go:130] >     .:53 {
	I0108 20:29:00.799382  103895 command_runner.go:130] >         errors
	I0108 20:29:00.799390  103895 command_runner.go:130] >         health {
	I0108 20:29:00.799397  103895 command_runner.go:130] >            lameduck 5s
	I0108 20:29:00.799403  103895 command_runner.go:130] >         }
	I0108 20:29:00.799410  103895 command_runner.go:130] >         ready
	I0108 20:29:00.799419  103895 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 20:29:00.799430  103895 command_runner.go:130] >            pods insecure
	I0108 20:29:00.799439  103895 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 20:29:00.799454  103895 command_runner.go:130] >            ttl 30
	I0108 20:29:00.799464  103895 command_runner.go:130] >         }
	I0108 20:29:00.799471  103895 command_runner.go:130] >         prometheus :9153
	I0108 20:29:00.799487  103895 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 20:29:00.799497  103895 command_runner.go:130] >            max_concurrent 1000
	I0108 20:29:00.799507  103895 command_runner.go:130] >         }
	I0108 20:29:00.799514  103895 command_runner.go:130] >         cache 30
	I0108 20:29:00.799521  103895 command_runner.go:130] >         loop
	I0108 20:29:00.799527  103895 command_runner.go:130] >         reload
	I0108 20:29:00.799536  103895 command_runner.go:130] >         loadbalance
	I0108 20:29:00.799542  103895 command_runner.go:130] >     }
	I0108 20:29:00.799552  103895 command_runner.go:130] > kind: ConfigMap
	I0108 20:29:00.799559  103895 command_runner.go:130] > metadata:
	I0108 20:29:00.799573  103895 command_runner.go:130] >   creationTimestamp: "2024-01-08T20:28:47Z"
	I0108 20:29:00.799582  103895 command_runner.go:130] >   name: coredns
	I0108 20:29:00.799592  103895 command_runner.go:130] >   namespace: kube-system
	I0108 20:29:00.799606  103895 command_runner.go:130] >   resourceVersion: "266"
	I0108 20:29:00.799619  103895 command_runner.go:130] >   uid: 8914502c-1cb5-4245-be44-38251fc77caa
	I0108 20:29:00.803244  103895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
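The sed pipeline above rewrites the Corefile fetched in the previous block: it inserts a log directive ahead of errors and, just before the forward block, a hosts stanza that resolves host.minikube.internal to the host gateway. Reconstructed from the command itself, the injected stanza is:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }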
	I0108 20:29:00.911413  103895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:29:00.912019  103895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:29:01.109409  103895 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:29:01.109439  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:01.109448  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:01.109455  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:01.112684  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:01.112704  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:01.112711  103895 round_trippers.go:580]     Content-Length: 291
	I0108 20:29:01.112716  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:01 GMT
	I0108 20:29:01.112721  103895 round_trippers.go:580]     Audit-Id: dae03f64-46d7-43d9-9340-428c0e681c79
	I0108 20:29:01.112726  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:01.112731  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:01.112736  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:01.112742  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:01.112977  103895 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f9e25afe-d819-409b-ac6a-2d6befc195f3","resourceVersion":"397","creationTimestamp":"2024-01-08T20:28:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:29:01.113115  103895 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-209824" context rescaled to 1 replicas
	I0108 20:29:01.113146  103895 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:29:01.115286  103895 out.go:177] * Verifying Kubernetes components...
	I0108 20:29:01.116767  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:29:01.589803  103895 command_runner.go:130] > configmap/coredns replaced
	I0108 20:29:01.596164  103895 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0108 20:29:01.596230  103895 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 20:29:01.596393  103895 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 20:29:01.596406  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:01.596418  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:01.596429  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:01.598282  103895 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:29:01.598303  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:01.598314  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:01 GMT
	I0108 20:29:01.598324  103895 round_trippers.go:580]     Audit-Id: 7a45d6fe-d9de-4aa9-8b9a-ac9f42d342f6
	I0108 20:29:01.598336  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:01.598352  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:01.598362  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:01.598375  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:01.598387  103895 round_trippers.go:580]     Content-Length: 1273
	I0108 20:29:01.598428  103895 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"standard","uid":"005e53eb-ae0e-4184-a542-aaa09b041e64","resourceVersion":"405","creationTimestamp":"2024-01-08T20:29:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:29:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 20:29:01.598901  103895 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"005e53eb-ae0e-4184-a542-aaa09b041e64","resourceVersion":"405","creationTimestamp":"2024-01-08T20:29:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:29:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 20:29:01.598962  103895 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 20:29:01.598987  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:01.599002  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:01.599015  103895 round_trippers.go:473]     Content-Type: application/json
	I0108 20:29:01.599027  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:01.601978  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:01.602011  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:01.602021  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:01.602030  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:01.602038  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:01.602046  103895 round_trippers.go:580]     Content-Length: 1220
	I0108 20:29:01.602055  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:01 GMT
	I0108 20:29:01.602072  103895 round_trippers.go:580]     Audit-Id: 8fa70238-efdd-400f-97c7-3ad1c68e17f6
	I0108 20:29:01.602080  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:01.602345  103895 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"005e53eb-ae0e-4184-a542-aaa09b041e64","resourceVersion":"405","creationTimestamp":"2024-01-08T20:29:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:29:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
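The PUT above re-saves the standard StorageClass carrying the storageclass.kubernetes.io/is-default-class: "true" annotation, which is what makes it the cluster default. A hedged kubectl equivalent, following the standard Kubernetes recipe:

    # Mark the "standard" StorageClass as the cluster default
    kubectl patch storageclass standard -p \
      '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'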
	I0108 20:29:01.823136  103895 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 20:29:01.828785  103895 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 20:29:01.836716  103895 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 20:29:01.850892  103895 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 20:29:01.858900  103895 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 20:29:01.869291  103895 command_runner.go:130] > pod/storage-provisioner created
	I0108 20:29:01.876715  103895 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 20:29:01.875322  103895 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:29:01.878322  103895 addons.go:508] enable addons completed in 1.270993865s: enabled=[default-storageclass storage-provisioner]
	I0108 20:29:01.877060  103895 kapi.go:59] client config for multinode-209824: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:29:01.878617  103895 node_ready.go:35] waiting up to 6m0s for node "multinode-209824" to be "Ready" ...
	I0108 20:29:01.878721  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:01.878731  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:01.878743  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:01.878754  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:01.881282  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:01.881310  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:01.881329  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:01.881338  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:01.881347  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:01 GMT
	I0108 20:29:01.881356  103895 round_trippers.go:580]     Audit-Id: c903313b-de76-4fea-bd3e-b2a35c2a514d
	I0108 20:29:01.881366  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:01.881380  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:01.881576  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:02.379127  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:02.379156  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:02.379164  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:02.379170  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:02.381763  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:02.381792  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:02.381804  103895 round_trippers.go:580]     Audit-Id: f84cad8f-809e-44b2-a7b4-5cbaebb32b9a
	I0108 20:29:02.381817  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:02.381824  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:02.381832  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:02.381840  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:02.381848  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:02 GMT
	I0108 20:29:02.381968  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:02.879631  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:02.879659  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:02.879666  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:02.879686  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:02.882637  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:02.882672  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:02.882685  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:02.882695  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:02.882704  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:02.882713  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:02.882731  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:02 GMT
	I0108 20:29:02.882741  103895 round_trippers.go:580]     Audit-Id: c6442e40-109c-4521-a5b8-b7c62cc9c52d
	I0108 20:29:02.882920  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:03.379438  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:03.379469  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:03.379478  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:03.379484  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:03.382858  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:03.382888  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:03.382896  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:03.382902  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:03.382922  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:03 GMT
	I0108 20:29:03.382930  103895 round_trippers.go:580]     Audit-Id: 54211230-cfbc-4da8-a1b0-38509ac97843
	I0108 20:29:03.382942  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:03.382960  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:03.383147  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:03.878936  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:03.878962  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:03.878970  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:03.878975  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:03.881529  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:03.881550  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:03.881559  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:03.881573  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:03.881583  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:03 GMT
	I0108 20:29:03.881592  103895 round_trippers.go:580]     Audit-Id: 134540cd-154c-45f7-9e3e-2067251aedbf
	I0108 20:29:03.881600  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:03.881613  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:03.881741  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:03.882161  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:04.378999  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:04.379045  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:04.379058  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:04.379069  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:04.382438  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:04.382488  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:04.382500  103895 round_trippers.go:580]     Audit-Id: bb1d0452-0720-4f55-adb7-f08c31e554ab
	I0108 20:29:04.382510  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:04.382518  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:04.382526  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:04.382543  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:04.382557  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:04 GMT
	I0108 20:29:04.382689  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:04.879233  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:04.879261  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:04.879268  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:04.879275  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:04.881548  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:04.881569  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:04.881579  103895 round_trippers.go:580]     Audit-Id: b64b0809-56e0-4dcc-ae72-0f3a534e2ebb
	I0108 20:29:04.881586  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:04.881593  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:04.881600  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:04.881608  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:04.881616  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:04 GMT
	I0108 20:29:04.881741  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:05.379279  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:05.379302  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:05.379310  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:05.379316  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:05.381537  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:05.381557  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:05.381567  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:05.381574  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:05.381582  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:05.381589  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:05.381596  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:05 GMT
	I0108 20:29:05.381604  103895 round_trippers.go:580]     Audit-Id: 9b04a7ed-e14b-404d-ab88-45ea547faa9f
	I0108 20:29:05.381715  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:05.878929  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:05.878961  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:05.878970  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:05.878979  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:05.881803  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:05.881844  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:05.881853  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:05.881859  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:05 GMT
	I0108 20:29:05.881865  103895 round_trippers.go:580]     Audit-Id: eb63fb14-7d94-4952-b009-c14c54e4fdfe
	I0108 20:29:05.881870  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:05.881877  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:05.881885  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:05.882062  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:05.882494  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:06.379830  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:06.379864  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:06.379874  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:06.379882  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:06.383208  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:06.383232  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:06.383240  103895 round_trippers.go:580]     Audit-Id: 5453aa64-f11e-4c47-ba37-55092a8ae283
	I0108 20:29:06.383246  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:06.383252  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:06.383257  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:06.383262  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:06.383267  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:06 GMT
	I0108 20:29:06.383476  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:06.879159  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:06.879198  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:06.879210  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:06.879218  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:06.882851  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:06.882895  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:06.882907  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:06.882917  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:06.882926  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:06 GMT
	I0108 20:29:06.882935  103895 round_trippers.go:580]     Audit-Id: fab1368b-16c6-4581-b416-0b7cac909d43
	I0108 20:29:06.882944  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:06.882952  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:06.883152  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:07.379823  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:07.379863  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:07.379872  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:07.379878  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:07.383424  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:07.383464  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:07.383476  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:07.383485  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:07 GMT
	I0108 20:29:07.383493  103895 round_trippers.go:580]     Audit-Id: 5308f4f2-40a2-4ca4-9e26-8ca303b51ff7
	I0108 20:29:07.383501  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:07.383508  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:07.383517  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:07.383689  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:07.879397  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:07.879451  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:07.879465  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:07.879482  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:07.882973  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:07.883010  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:07.883023  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:07.883033  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:07.883042  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:07.883049  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:07.883056  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:07 GMT
	I0108 20:29:07.883064  103895 round_trippers.go:580]     Audit-Id: bb32a51f-2ea3-4c6f-9fd4-8fdc29ae04bc
	I0108 20:29:07.883242  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:07.883625  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:08.379813  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:08.379848  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:08.379858  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:08.379866  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:08.382843  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:08.382871  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:08.382882  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:08.382891  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:08.382900  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:08.382909  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:08 GMT
	I0108 20:29:08.382918  103895 round_trippers.go:580]     Audit-Id: 1a586e36-f27e-4129-a248-05b63e46977f
	I0108 20:29:08.382927  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:08.383053  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:08.878903  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:08.878931  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:08.878939  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:08.878945  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:08.881401  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:08.881430  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:08.881438  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:08.881444  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:08.881451  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:08 GMT
	I0108 20:29:08.881456  103895 round_trippers.go:580]     Audit-Id: 0c15f1c8-b901-492d-8835-8e01509f0444
	I0108 20:29:08.881462  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:08.881467  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:08.881645  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:09.379319  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:09.379353  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:09.379386  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:09.379401  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:09.382141  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:09.382166  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:09.382176  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:09.382183  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:09.382190  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:09.382197  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:09.382205  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:09 GMT
	I0108 20:29:09.382214  103895 round_trippers.go:580]     Audit-Id: e476713d-4787-4d18-9cb4-6934df61e856
	I0108 20:29:09.382357  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:09.879073  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:09.879120  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:09.879132  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:09.879142  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:09.881977  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:09.882012  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:09.882024  103895 round_trippers.go:580]     Audit-Id: fcf1c30d-a204-4022-babf-add0196a6a9f
	I0108 20:29:09.882033  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:09.882042  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:09.882052  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:09.882061  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:09.882070  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:09 GMT
	I0108 20:29:09.882221  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:10.379832  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:10.379862  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:10.379871  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:10.379877  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:10.382602  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:10.382628  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:10.382637  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:10.382646  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:10.382655  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:10.382664  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:10 GMT
	I0108 20:29:10.382673  103895 round_trippers.go:580]     Audit-Id: 9d0eb5ee-9b60-47ba-a71d-9d08cfc61992
	I0108 20:29:10.382685  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:10.382831  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:10.383195  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:10.879731  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:10.879758  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:10.879767  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:10.879773  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:10.882719  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:10.882749  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:10.882760  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:10.882767  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:10.882774  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:10.882786  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:10 GMT
	I0108 20:29:10.882794  103895 round_trippers.go:580]     Audit-Id: 28762177-e6aa-468b-99c5-63a110f873a1
	I0108 20:29:10.882803  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:10.882972  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:11.379638  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:11.379668  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:11.379678  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:11.379686  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:11.381977  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:11.382003  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:11.382013  103895 round_trippers.go:580]     Audit-Id: 1f38d207-771d-49f9-b3bb-d5ecb25bf908
	I0108 20:29:11.382021  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:11.382029  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:11.382038  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:11.382050  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:11.382057  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:11 GMT
	I0108 20:29:11.382171  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:11.878929  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:11.878976  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:11.878984  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:11.878991  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:11.882179  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:11.882209  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:11.882220  103895 round_trippers.go:580]     Audit-Id: 9652c3d8-01e8-4e33-8de9-28543ca09d21
	I0108 20:29:11.882227  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:11.882232  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:11.882237  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:11.882243  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:11.882250  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:11 GMT
	I0108 20:29:11.882503  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:12.379203  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:12.379246  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:12.379254  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:12.379262  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:12.382630  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:12.382667  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:12.382678  103895 round_trippers.go:580]     Audit-Id: 6007e894-9cd2-498f-8a3a-4bbc482d12c6
	I0108 20:29:12.382684  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:12.382690  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:12.382695  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:12.382701  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:12.382706  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:12 GMT
	I0108 20:29:12.382926  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:12.383384  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:12.879682  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:12.879716  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:12.879725  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:12.879735  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:12.882228  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:12.882252  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:12.882262  103895 round_trippers.go:580]     Audit-Id: f49306ac-d20e-4064-b514-b98337bea8ae
	I0108 20:29:12.882276  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:12.882285  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:12.882291  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:12.882297  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:12.882308  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:12 GMT
	I0108 20:29:12.882429  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:13.378979  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:13.379008  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:13.379017  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:13.379023  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:13.382220  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:13.382250  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:13.382258  103895 round_trippers.go:580]     Audit-Id: 73cf2b45-fb48-4258-96d3-add1c93a7039
	I0108 20:29:13.382265  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:13.382270  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:13.382275  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:13.382281  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:13.382288  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:13 GMT
	I0108 20:29:13.382531  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:13.879633  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:13.879669  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:13.879679  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:13.879686  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:13.882866  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:13.882887  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:13.882894  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:13.882900  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:13 GMT
	I0108 20:29:13.882905  103895 round_trippers.go:580]     Audit-Id: 1cb6bb45-b1d1-4347-a1c5-bf3075dfb413
	I0108 20:29:13.882910  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:13.882915  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:13.882922  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:13.883147  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:14.379792  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:14.379830  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:14.379840  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:14.379849  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:14.382552  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:14.382577  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:14.382589  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:14.382598  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:14.382605  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:14.382614  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:14.382622  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:14 GMT
	I0108 20:29:14.382630  103895 round_trippers.go:580]     Audit-Id: 14185344-a447-4af7-ad41-a7f0de62512e
	I0108 20:29:14.382791  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:14.879554  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:14.879578  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:14.879587  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:14.879593  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:14.882397  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:14.882423  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:14.882434  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:14.882443  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:14 GMT
	I0108 20:29:14.882451  103895 round_trippers.go:580]     Audit-Id: 2c320ea1-2635-4125-9ff5-d9640cd59246
	I0108 20:29:14.882459  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:14.882466  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:14.882483  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:14.882729  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:14.883158  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
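
What the loop above records is minikube's node_ready wait: roughly every 500ms it issues GET /api/v1/nodes/multinode-209824, re-reads the Node's status conditions, and logs 'has status "Ready":"False"' until the kubelet reports Ready. A minimal client-go sketch of that polling pattern is below; it is an illustration of the shape of the loop, not minikube's actual implementation, and the helper name waitNodeReady, the 500ms interval, the 7-minute budget, and the kubeconfig wiring are all assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady is a hypothetical helper mirroring the poll visible in the
	// log: GET the Node every 500ms and stop once its Ready condition is True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil // the node now reports Ready:"True"
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // deadline hit: node never became Ready
			case <-tick.C:
			}
		}
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 7*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "multinode-209824"); err != nil {
			fmt.Println("node not ready:", err)
			return
		}
		fmt.Println("node is Ready")
	}

Read this way, each node_ready.go:58 line in the log is one pass through the condition check coming up false, and the five GET/200 entries between consecutive status lines are the intervening polls; the loop exits as soon as a response carries Ready=True.
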
	I0108 20:29:15.379465  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:15.379504  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:15.379517  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:15.379525  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:15.382664  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:15.382689  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:15.382699  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:15.382704  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:15 GMT
	I0108 20:29:15.382713  103895 round_trippers.go:580]     Audit-Id: 208f75b2-e57f-45d2-a002-1e503a14b6fa
	I0108 20:29:15.382722  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:15.382731  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:15.382739  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:15.382894  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:15.879557  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:15.879599  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:15.879606  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:15.879612  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:15.882130  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:15.882155  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:15.882166  103895 round_trippers.go:580]     Audit-Id: ee6609c9-4f98-408c-b579-e8578af31546
	I0108 20:29:15.882175  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:15.882184  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:15.882193  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:15.882201  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:15.882207  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:15 GMT
	I0108 20:29:15.882379  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:16.378855  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:16.378893  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:16.378901  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:16.378908  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:16.382177  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:16.382207  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:16.382221  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:16.382229  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:16 GMT
	I0108 20:29:16.382237  103895 round_trippers.go:580]     Audit-Id: 791543d2-5acc-4f1a-af7a-e1341f53a0c3
	I0108 20:29:16.382245  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:16.382253  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:16.382263  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:16.382488  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:16.879019  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:16.879061  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:16.879070  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:16.879076  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:16.882243  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:16.882263  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:16.882270  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:16.882276  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:16.882282  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:16.882288  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:16.882293  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:16 GMT
	I0108 20:29:16.882298  103895 round_trippers.go:580]     Audit-Id: 50b5b13a-5884-4dee-9e64-3fd83769c5a9
	I0108 20:29:16.882460  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:17.379144  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:17.379185  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:17.379194  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:17.379203  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:17.381679  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:17.381703  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:17.381713  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:17.381722  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:17.381730  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:17 GMT
	I0108 20:29:17.381738  103895 round_trippers.go:580]     Audit-Id: 490bd994-ab0d-487a-a0a9-3061544e8083
	I0108 20:29:17.381747  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:17.381762  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:17.381898  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:17.382227  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:17.879576  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:17.879604  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:17.879612  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:17.879618  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:17.882591  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:17.882617  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:17.882623  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:17.882629  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:17.882634  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:17.882641  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:17.882649  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:17 GMT
	I0108 20:29:17.882658  103895 round_trippers.go:580]     Audit-Id: 495f4b62-6143-4846-99fd-f31123ebb990
	I0108 20:29:17.882922  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:18.379567  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:18.379607  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:18.379618  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:18.379626  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:18.382713  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:18.382745  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:18.382756  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:18.382765  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:18.382774  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:18.382783  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:18.382792  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:18 GMT
	I0108 20:29:18.382801  103895 round_trippers.go:580]     Audit-Id: b978d0c0-0188-4337-affb-3e153bd05e2e
	I0108 20:29:18.382959  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:18.879790  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:18.879836  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:18.879847  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:18.879856  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:18.883029  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:18.883065  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:18.883074  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:18 GMT
	I0108 20:29:18.883080  103895 round_trippers.go:580]     Audit-Id: f0065f1d-2188-4f94-8f27-c82d8ecd31cd
	I0108 20:29:18.883086  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:18.883092  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:18.883097  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:18.883103  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:18.883334  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:19.378910  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:19.378948  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:19.378960  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:19.378972  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:19.381809  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:19.381833  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:19.381843  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:19 GMT
	I0108 20:29:19.381851  103895 round_trippers.go:580]     Audit-Id: 592ad16a-3659-4fd2-a009-740f4246294c
	I0108 20:29:19.381862  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:19.381869  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:19.381876  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:19.381885  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:19.382041  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:19.382385  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:19.879708  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:19.879734  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:19.879741  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:19.879749  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:19.882797  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:19.882823  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:19.882833  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:19.882858  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:19.882866  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:19.882875  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:19 GMT
	I0108 20:29:19.882887  103895 round_trippers.go:580]     Audit-Id: d182f827-9ac2-4968-89c2-599b40fbc1d1
	I0108 20:29:19.882894  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:19.883037  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:20.379735  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:20.379777  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:20.379788  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:20.379796  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:20.383040  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:20.383062  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:20.383069  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:20 GMT
	I0108 20:29:20.383075  103895 round_trippers.go:580]     Audit-Id: e448f925-7ad8-4f19-9699-7dafbaed96e1
	I0108 20:29:20.383080  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:20.383086  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:20.383091  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:20.383097  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:20.383300  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:20.879243  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:20.879276  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:20.879285  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:20.879291  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:20.881987  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:20.882021  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:20.882033  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:20 GMT
	I0108 20:29:20.882042  103895 round_trippers.go:580]     Audit-Id: bc124089-5a3f-4279-a51c-602eeac10fc2
	I0108 20:29:20.882049  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:20.882060  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:20.882068  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:20.882075  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:20.882244  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:21.378889  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:21.378921  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:21.378929  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:21.378935  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:21.381303  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:21.381332  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:21.381342  103895 round_trippers.go:580]     Audit-Id: 3aef15f8-f672-4b3d-a944-9d8fe4f6b410
	I0108 20:29:21.381351  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:21.381359  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:21.381367  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:21.381374  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:21.381383  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:21 GMT
	I0108 20:29:21.381480  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:21.879084  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:21.879123  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:21.879131  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:21.879138  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:21.882471  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:21.882508  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:21.882520  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:21.882530  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:21.882539  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:21.882548  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:21 GMT
	I0108 20:29:21.882557  103895 round_trippers.go:580]     Audit-Id: 10f8724d-9fc4-4b51-9d62-1ab9956d61c4
	I0108 20:29:21.882566  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:21.882751  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:21.883145  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:22.379465  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:22.379499  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:22.379515  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:22.379521  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:22.382829  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:22.382862  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:22.382871  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:22 GMT
	I0108 20:29:22.382877  103895 round_trippers.go:580]     Audit-Id: 1d39e022-36d7-42e8-9d9a-17aafb688250
	I0108 20:29:22.382882  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:22.382887  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:22.382893  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:22.382898  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:22.383133  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:22.879818  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:22.879860  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:22.879875  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:22.879883  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:22.883271  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:22.883309  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:22.883322  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:22.883331  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:22.883340  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:22.883349  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:22 GMT
	I0108 20:29:22.883380  103895 round_trippers.go:580]     Audit-Id: cf61adf1-3b48-4705-a606-4623a32a6814
	I0108 20:29:22.883399  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:22.883589  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:23.379107  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:23.379150  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:23.379161  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:23.379177  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:23.382073  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:23.382111  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:23.382123  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:23.382133  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:23.382142  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:23 GMT
	I0108 20:29:23.382148  103895 round_trippers.go:580]     Audit-Id: e59bd0bf-62c9-4e53-b822-aa12ea6e483c
	I0108 20:29:23.382156  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:23.382162  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:23.382365  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:23.879584  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:23.879616  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:23.879624  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:23.879630  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:23.882532  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:23.882589  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:23.882600  103895 round_trippers.go:580]     Audit-Id: 8b3593de-2dc0-4c99-9594-2bcaf62adc02
	I0108 20:29:23.882606  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:23.882612  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:23.882618  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:23.882624  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:23.882630  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:23 GMT
	I0108 20:29:23.882790  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:23.883272  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:24.379259  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:24.379284  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:24.379292  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:24.379298  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:24.381733  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:24.381760  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:24.381770  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:24.381779  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:24.381784  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:24 GMT
	I0108 20:29:24.381790  103895 round_trippers.go:580]     Audit-Id: 001ce5e2-bec0-4d9c-9288-691b5c339c57
	I0108 20:29:24.381795  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:24.381803  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:24.381980  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:24.879651  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:24.879682  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:24.879691  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:24.879697  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:24.882635  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:24.882656  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:24.882663  103895 round_trippers.go:580]     Audit-Id: df83cc51-628b-497a-b3aa-9b0f741ff33c
	I0108 20:29:24.882669  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:24.882674  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:24.882678  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:24.882684  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:24.882693  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:24 GMT
	I0108 20:29:24.882913  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:25.379531  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:25.379557  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:25.379565  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:25.379571  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:25.381776  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:25.381795  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:25.381802  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:25.381807  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:25.381812  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:25.381819  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:25 GMT
	I0108 20:29:25.381826  103895 round_trippers.go:580]     Audit-Id: 52c715ab-d1ea-4bc7-93b9-89e300c5f8d7
	I0108 20:29:25.381834  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:25.382019  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:25.879840  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:25.879883  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:25.879895  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:25.879906  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:25.884647  103895 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:29:25.884689  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:25.884702  103895 round_trippers.go:580]     Audit-Id: f4d9f645-35a9-4c70-ad16-c1308d3f3b32
	I0108 20:29:25.884711  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:25.884720  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:25.884729  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:25.884738  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:25.884747  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:25 GMT
	I0108 20:29:25.884981  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:25.885383  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:26.379677  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:26.379710  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:26.379732  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:26.379740  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:26.382293  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:26.382318  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:26.382325  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:26.382331  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:26.382336  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:26.382342  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:26.382347  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:26 GMT
	I0108 20:29:26.382352  103895 round_trippers.go:580]     Audit-Id: 39bf16fe-8b7e-4969-91ad-5d32dc7a518a
	I0108 20:29:26.382529  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:26.879222  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:26.879272  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:26.879284  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:26.879296  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:26.882640  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:26.882683  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:26.882695  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:26.882705  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:26.882713  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:26.882718  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:26 GMT
	I0108 20:29:26.882724  103895 round_trippers.go:580]     Audit-Id: c087015d-c174-4f7f-bb00-0abadd6fb1bc
	I0108 20:29:26.882729  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:26.882973  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:27.379689  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:27.379731  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:27.379741  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:27.379748  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:27.383641  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:27.383677  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:27.383688  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:27.383698  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:27 GMT
	I0108 20:29:27.383707  103895 round_trippers.go:580]     Audit-Id: 0e7c3a8d-bb1a-4390-9ec8-69bb8692eb4d
	I0108 20:29:27.383715  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:27.383723  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:27.383732  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:27.383933  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:27.879659  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:27.879713  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:27.879722  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:27.879728  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:27.883294  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:27.883334  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:27.883346  103895 round_trippers.go:580]     Audit-Id: f674cd29-ebcf-4040-96c6-fdfe9cb64167
	I0108 20:29:27.883375  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:27.883385  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:27.883394  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:27.883403  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:27.883412  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:27 GMT
	I0108 20:29:27.883584  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:28.379771  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:28.379809  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:28.379818  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:28.379827  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:28.382756  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:28.382796  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:28.382807  103895 round_trippers.go:580]     Audit-Id: 2c810b7c-00af-44f8-bf50-7737a8eaed56
	I0108 20:29:28.382816  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:28.382823  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:28.382832  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:28.382840  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:28.382848  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:28 GMT
	I0108 20:29:28.383032  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:28.383486  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:28.879908  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:28.879945  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:28.879955  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:28.879961  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:28.882847  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:28.882868  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:28.882875  103895 round_trippers.go:580]     Audit-Id: 5dd92346-9597-4ede-8186-1ecd32a8d2e7
	I0108 20:29:28.882881  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:28.882886  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:28.882892  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:28.882897  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:28.882902  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:28 GMT
	I0108 20:29:28.883089  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:29.379951  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:29.379993  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:29.380005  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:29.380011  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:29.382792  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:29.382811  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:29.382818  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:29.382824  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:29 GMT
	I0108 20:29:29.382829  103895 round_trippers.go:580]     Audit-Id: ef7d00cd-fe11-4c05-9d84-8623ef618abb
	I0108 20:29:29.382834  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:29.382839  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:29.382847  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:29.382983  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:29.879761  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:29.879793  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:29.879801  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:29.879807  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:29.882992  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:29.883024  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:29.883035  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:29.883043  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:29.883050  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:29 GMT
	I0108 20:29:29.883058  103895 round_trippers.go:580]     Audit-Id: fc48b9da-acb3-4c36-9020-9afa73d4bbcf
	I0108 20:29:29.883066  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:29.883077  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:29.883248  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:30.379890  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:30.379924  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:30.379933  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:30.379939  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:30.382639  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:30.382670  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:30.382681  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:30.382689  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:30.382697  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:30.382705  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:30 GMT
	I0108 20:29:30.382713  103895 round_trippers.go:580]     Audit-Id: 9008db23-e003-4bed-85d6-e5178b8fec31
	I0108 20:29:30.382722  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:30.382858  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:30.879742  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:30.879776  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:30.879785  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:30.879792  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:30.882778  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:30.882808  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:30.882817  103895 round_trippers.go:580]     Audit-Id: e551bab3-2fea-4ff5-a7bd-3e3bab5c5c1e
	I0108 20:29:30.882822  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:30.882828  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:30.882833  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:30.882839  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:30.882844  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:30 GMT
	I0108 20:29:30.883070  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:30.883473  103895 node_ready.go:58] node "multinode-209824" has status "Ready":"False"
	I0108 20:29:31.379964  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:31.380002  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:31.380016  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:31.380027  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:31.383154  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:31.383177  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:31.383184  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:31.383190  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:31.383196  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:31 GMT
	I0108 20:29:31.383201  103895 round_trippers.go:580]     Audit-Id: 8d42733b-2b32-4735-825f-efe8a6f83812
	I0108 20:29:31.383206  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:31.383211  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:31.383394  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:31.879024  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:31.879065  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:31.879076  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:31.879083  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:31.882637  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:31.882660  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:31.882667  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:31.882673  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:31.882680  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:31 GMT
	I0108 20:29:31.882688  103895 round_trippers.go:580]     Audit-Id: f48a7cff-c9cc-4f56-bd2e-22436e4eba8d
	I0108 20:29:31.882695  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:31.882703  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:31.882863  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"373","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0108 20:29:32.379665  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:32.379711  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:32.379722  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:32.379729  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:32.382358  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:32.382382  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:32.382394  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:32.382402  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:32 GMT
	I0108 20:29:32.382430  103895 round_trippers.go:580]     Audit-Id: 1b08fb88-f5ee-4f05-b8a7-fd31e551142e
	I0108 20:29:32.382442  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:32.382450  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:32.382462  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:32.382584  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:32.382901  103895 node_ready.go:49] node "multinode-209824" has status "Ready":"True"
	I0108 20:29:32.382919  103895 node_ready.go:38] duration metric: took 30.504285027s waiting for node "multinode-209824" to be "Ready" ...
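
The repeating GET loop above is a readiness poll: the client re-fetches the Node object roughly every 500 ms until its Ready condition reports True (the node_ready.go lines). Below is a minimal client-go sketch of such a poll, not minikube's actual node_ready.go implementation; the kubeconfig location, node name, and 6-minute deadline are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig at the default location points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// The same API call the round_trippers lines trace:
		// GET /api/v1/nodes/multinode-209824
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-209824", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}
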
	I0108 20:29:32.382928  103895 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:29:32.383019  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:29:32.383030  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:32.383040  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:32.383048  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:32.386076  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:32.386098  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:32.386109  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:32 GMT
	I0108 20:29:32.386117  103895 round_trippers.go:580]     Audit-Id: 8eac4731-7eb7-45fc-a8e1-ce8cb159cbd6
	I0108 20:29:32.386124  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:32.386133  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:32.386141  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:32.386153  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:32.386519  103895 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"440","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0108 20:29:32.389609  103895 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ds62v" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:32.389743  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ds62v
	I0108 20:29:32.389752  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:32.389758  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:32.389764  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:32.391756  103895 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:29:32.391769  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:32.391775  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:32 GMT
	I0108 20:29:32.391783  103895 round_trippers.go:580]     Audit-Id: 6e49a616-4a7a-4499-8921-cbb3c25e4fa4
	I0108 20:29:32.391791  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:32.391802  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:32.391813  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:32.391822  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:32.391933  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"440","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 20:29:32.392450  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:32.392468  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:32.392475  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:32.392482  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:32.394591  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:32.394613  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:32.394623  103895 round_trippers.go:580]     Audit-Id: 83be7b99-c8b8-4ac3-8f51-1ed6c7db2d05
	I0108 20:29:32.394631  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:32.394640  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:32.394657  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:32.394666  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:32.394675  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:32 GMT
	I0108 20:29:32.394890  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:32.890500  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ds62v
	I0108 20:29:32.890530  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:32.890541  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:32.890550  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:32.893626  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:32.893656  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:32.893667  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:32.893673  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:32.893681  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:32.893691  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:32 GMT
	I0108 20:29:32.893701  103895 round_trippers.go:580]     Audit-Id: e5d3c430-c86b-4983-aeb7-52ded1ed0f6a
	I0108 20:29:32.893719  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:32.893887  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"440","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 20:29:32.894542  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:32.894563  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:32.894574  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:32.894584  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:32.897041  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:32.897074  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:32.897092  103895 round_trippers.go:580]     Audit-Id: 4dede16b-d161-493d-bde5-3334726590ae
	I0108 20:29:32.897102  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:32.897110  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:32.897118  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:32.897125  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:32.897132  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:32 GMT
	I0108 20:29:32.897263  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:33.389867  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ds62v
	I0108 20:29:33.389899  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.389908  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.389914  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.393165  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:33.393202  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.393213  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.393222  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.393229  103895 round_trippers.go:580]     Audit-Id: 8a3cf244-a88c-429b-a2b5-a4481932edeb
	I0108 20:29:33.393235  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.393244  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.393256  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.393427  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"440","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 20:29:33.394047  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:33.394068  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.394078  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.394086  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.397096  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.397129  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.397141  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.397151  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.397160  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.397166  103895 round_trippers.go:580]     Audit-Id: ca1b035e-d49e-417f-bb46-8b9fd02cd17f
	I0108 20:29:33.397172  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.397182  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.397515  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:33.890608  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ds62v
	I0108 20:29:33.890635  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.890644  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.890682  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.893947  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:33.893985  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.893996  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.894004  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.894011  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.894019  103895 round_trippers.go:580]     Audit-Id: 7bac5f44-1372-49e2-b859-631ac8f1f162
	I0108 20:29:33.894026  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.894034  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.894182  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"453","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 20:29:33.894740  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:33.894757  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.894764  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.894770  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.897321  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.897349  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.897364  103895 round_trippers.go:580]     Audit-Id: 9f580c41-795b-492e-b493-e0f9bff0d0a1
	I0108 20:29:33.897372  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.897379  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.897388  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.897403  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.897411  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.897558  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:33.898075  103895 pod_ready.go:92] pod "coredns-5dd5756b68-ds62v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:33.898128  103895 pod_ready.go:81] duration metric: took 1.508487864s waiting for pod "coredns-5dd5756b68-ds62v" in "kube-system" namespace to be "Ready" ...
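
The waits that follow repeat the same pattern per pod: fetch the pod, check its PodReady condition, re-check the node, then move on to the next system-critical component (etcd, kube-apiserver, kube-controller-manager, ...). A sketch of that phase under the same assumptions, reusing the imports and clientset from the previous sketch; the selector strings come from the pod_ready.go:35 line earlier, while the helper names and the single-pod check are hypothetical simplifications.

// podReady is a hypothetical helper: a Pod counts as "Ready" when its
// PodReady condition is True, which is what the pod_ready.go lines check.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitSystemPods polls kube-system for each system-critical selector from the
// log until a matching pod reports Ready or the deadline passes. It reuses
// the cs clientset built in the previous sketch.
func waitSystemPods(cs *kubernetes.Clientset, deadline time.Time) error {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		for {
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods matching %q", sel)
			}
			// Mirrors the traced calls:
			// GET /api/v1/namespaces/kube-system/pods?labelSelector=...
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	return nil
}
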
	I0108 20:29:33.898151  103895 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:33.898253  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-209824
	I0108 20:29:33.898266  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.898278  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.898290  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.901316  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.901347  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.901359  103895 round_trippers.go:580]     Audit-Id: b6fe5dc5-3f1a-4c85-899f-32f1621edbeb
	I0108 20:29:33.901367  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.901377  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.901384  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.901392  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.901399  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.901602  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-209824","namespace":"kube-system","uid":"2ba4f928-8212-4851-a0d0-ecb6766b0d38","resourceVersion":"424","creationTimestamp":"2024-01-08T20:28:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"efd65551b99a6b027e44ff3fe2a4bac4","kubernetes.io/config.mirror":"efd65551b99a6b027e44ff3fe2a4bac4","kubernetes.io/config.seen":"2024-01-08T20:28:40.985955037Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 20:29:33.902163  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:33.902183  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.902191  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.902197  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.904706  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.904732  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.904741  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.904747  103895 round_trippers.go:580]     Audit-Id: bf8669a7-43ba-4e71-be1e-732f3bf49a2d
	I0108 20:29:33.904752  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.904760  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.904771  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.904781  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.904994  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:33.905428  103895 pod_ready.go:92] pod "etcd-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:33.905450  103895 pod_ready.go:81] duration metric: took 7.289285ms waiting for pod "etcd-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:33.905467  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:33.905549  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-209824
	I0108 20:29:33.905560  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.905575  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.905585  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.907859  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.907877  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.907883  103895 round_trippers.go:580]     Audit-Id: f063da72-483c-44b6-95b3-48fe94705749
	I0108 20:29:33.907889  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.907894  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.907899  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.907905  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.907910  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.908082  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-209824","namespace":"kube-system","uid":"aceea26b-3461-4240-979e-c8aa9f77e8fb","resourceVersion":"398","creationTimestamp":"2024-01-08T20:28:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cf7593793b45828fd4be9343f68cfb68","kubernetes.io/config.mirror":"cf7593793b45828fd4be9343f68cfb68","kubernetes.io/config.seen":"2024-01-08T20:28:47.413857407Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 20:29:33.908506  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:33.908519  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.908526  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.908531  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.910354  103895 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:29:33.910371  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.910380  103895 round_trippers.go:580]     Audit-Id: 49b72c1f-aaed-4e47-951d-72778d837798
	I0108 20:29:33.910391  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.910399  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.910408  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.910421  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.910429  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.910549  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:33.910860  103895 pod_ready.go:92] pod "kube-apiserver-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:33.910875  103895 pod_ready.go:81] duration metric: took 5.394015ms waiting for pod "kube-apiserver-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:33.910885  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:33.910932  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-209824
	I0108 20:29:33.910939  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.910945  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.910951  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.913154  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.913178  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.913188  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.913197  103895 round_trippers.go:580]     Audit-Id: 8b609516-1567-439a-84cf-d62ea0a23188
	I0108 20:29:33.913206  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.913222  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.913234  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.913242  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.913413  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-209824","namespace":"kube-system","uid":"9e898128-2e03-41f5-8afc-23b34ee9e755","resourceVersion":"422","creationTimestamp":"2024-01-08T20:28:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e91228d8e0f7fca8a3a21815e52a30df","kubernetes.io/config.mirror":"e91228d8e0f7fca8a3a21815e52a30df","kubernetes.io/config.seen":"2024-01-08T20:28:47.413859105Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 20:29:33.913961  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:33.913977  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.913995  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.914008  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.916266  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.916284  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.916291  103895 round_trippers.go:580]     Audit-Id: 3c8b6e3b-2415-4b5c-9971-60ff2f8cd53a
	I0108 20:29:33.916296  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.916302  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.916308  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.916315  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.916323  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.916487  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:33.916781  103895 pod_ready.go:92] pod "kube-controller-manager-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:33.916798  103895 pod_ready.go:81] duration metric: took 5.907395ms waiting for pod "kube-controller-manager-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:33.916807  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s267w" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:33.916863  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s267w
	I0108 20:29:33.916870  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.916877  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.916885  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.919114  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.919132  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.919139  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.919145  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.919150  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.919155  103895 round_trippers.go:580]     Audit-Id: 51ca9c8f-025f-4b60-9789-2db8a3eb1fb8
	I0108 20:29:33.919163  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.919172  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.919329  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s267w","generateName":"kube-proxy-","namespace":"kube-system","uid":"825c87c7-7b31-44a0-9009-1603f045b6a8","resourceVersion":"409","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9cbae20-0613-456c-9bd2-0a174674a6ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9cbae20-0613-456c-9bd2-0a174674a6ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 20:29:33.980132  103895 request.go:629] Waited for 60.293655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:33.980240  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:33.980255  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:33.980266  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:33.980276  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:33.983070  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:33.983093  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:33.983100  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:33.983106  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:33.983111  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:33.983116  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:33 GMT
	I0108 20:29:33.983121  103895 round_trippers.go:580]     Audit-Id: 9e2569e7-c3b6-4d7e-8898-a50c1b671738
	I0108 20:29:33.983125  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:33.983256  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:33.983632  103895 pod_ready.go:92] pod "kube-proxy-s267w" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:33.983651  103895 pod_ready.go:81] duration metric: took 66.83824ms waiting for pod "kube-proxy-s267w" in "kube-system" namespace to be "Ready" ...
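The "Waited for ... due to client-side throttling" entries above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side API Priority and Fairness; the log says as much. A minimal sketch of raising those limits on a rest.Config, with illustrative values rather than minikube's actual settings:

package kubeutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset whose client-side limiter allows 50 req/s with
// a burst of 100 instead of client-go's defaults of 5 and 10.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}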
	I0108 20:29:33.983662  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:34.180201  103895 request.go:629] Waited for 196.435461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-209824
	I0108 20:29:34.180318  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-209824
	I0108 20:29:34.180323  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:34.180332  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:34.180342  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:34.185905  103895 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:29:34.185939  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:34.185949  103895 round_trippers.go:580]     Audit-Id: a827fec0-3752-41ae-8e44-a85807710cd4
	I0108 20:29:34.185958  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:34.185966  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:34.185974  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:34.185981  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:34.185988  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:34 GMT
	I0108 20:29:34.186141  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-209824","namespace":"kube-system","uid":"dfd223e6-f902-4432-bdd8-b39f4c0d276f","resourceVersion":"423","creationTimestamp":"2024-01-08T20:28:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"04a6ca48d9f71335316ab83231ec8e96","kubernetes.io/config.mirror":"04a6ca48d9f71335316ab83231ec8e96","kubernetes.io/config.seen":"2024-01-08T20:28:47.413849750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 20:29:34.379858  103895 request.go:629] Waited for 193.284316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:34.379937  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:29:34.379942  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:34.379951  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:34.379958  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:34.383525  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:34.383555  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:34.383565  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:34.383573  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:34.383580  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:34.383588  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:34.383595  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:34 GMT
	I0108 20:29:34.383603  103895 round_trippers.go:580]     Audit-Id: 31b516d4-e8d1-41bd-a07e-81ab72b84e44
	I0108 20:29:34.383729  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:29:34.384112  103895 pod_ready.go:92] pod "kube-scheduler-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:34.384136  103895 pod_ready.go:81] duration metric: took 400.464525ms waiting for pod "kube-scheduler-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:34.384155  103895 pod_ready.go:38] duration metric: took 2.001180074s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
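The pod_ready.go waits above poll each system pod until its PodReady condition reports True. A minimal sketch of that loop with client-go; the interval, timeout, and error handling are illustrative, not minikube's exact implementation:

package kubeutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named pod's PodReady condition is True.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat lookup errors as transient and keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}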
	I0108 20:29:34.384187  103895 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:29:34.384276  103895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:29:34.396367  103895 command_runner.go:130] > 1388
	I0108 20:29:34.397631  103895 api_server.go:72] duration metric: took 33.284452455s to wait for apiserver process to appear ...
	I0108 20:29:34.397662  103895 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:29:34.397691  103895 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 20:29:34.402612  103895 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
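The healthz probe above is a plain HTTPS GET whose 200 response body must read "ok". A sketch of the same check; skipping certificate verification is a simplification here, since minikube really authenticates with the cluster's certificates:

package kubeutil

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthy GETs <endpoint>/healthz and expects a 200 "ok" body.
func apiServerHealthy(endpoint string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}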
	I0108 20:29:34.402727  103895 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0108 20:29:34.402740  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:34.402753  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:34.402764  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:34.403908  103895 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:29:34.403927  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:34.403934  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:34.403940  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:34.403946  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:34.403952  103895 round_trippers.go:580]     Content-Length: 264
	I0108 20:29:34.403961  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:34 GMT
	I0108 20:29:34.403969  103895 round_trippers.go:580]     Audit-Id: 275b009f-9b2d-47d1-8f8e-dbd991f682aa
	I0108 20:29:34.403979  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:34.404009  103895 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 20:29:34.404142  103895 api_server.go:141] control plane version: v1.28.4
	I0108 20:29:34.404166  103895 api_server.go:131] duration metric: took 6.496143ms to wait for apiserver health ...
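The /version payload printed above is ordinary JSON, so extracting the control plane version the log reports next needs only a small struct (the field tags match the response body verbatim):

package kubeutil

import "encoding/json"

// serverVersion mirrors the fields of the /version response used above.
type serverVersion struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

// parseVersion returns the gitVersion field, e.g. "v1.28.4".
func parseVersion(body []byte) (string, error) {
	var v serverVersion
	if err := json.Unmarshal(body, &v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}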
	I0108 20:29:34.404184  103895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:29:34.580722  103895 request.go:629] Waited for 176.404989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:29:34.580815  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:29:34.580820  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:34.580829  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:34.580836  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:34.584523  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:34.584547  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:34.584557  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:34.584565  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:34.584584  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:34 GMT
	I0108 20:29:34.584593  103895 round_trippers.go:580]     Audit-Id: 00f6ae50-75ce-47b0-9bf0-f34de764a1e8
	I0108 20:29:34.584603  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:34.584609  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:34.585157  103895 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"453","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 20:29:34.586879  103895 system_pods.go:59] 8 kube-system pods found
	I0108 20:29:34.586901  103895 system_pods.go:61] "coredns-5dd5756b68-ds62v" [926641e0-32b5-4e31-8361-c677061ec067] Running
	I0108 20:29:34.586905  103895 system_pods.go:61] "etcd-multinode-209824" [2ba4f928-8212-4851-a0d0-ecb6766b0d38] Running
	I0108 20:29:34.586909  103895 system_pods.go:61] "kindnet-k59d5" [cc861346-590e-440e-b826-f9a35f006571] Running
	I0108 20:29:34.586914  103895 system_pods.go:61] "kube-apiserver-multinode-209824" [aceea26b-3461-4240-979e-c8aa9f77e8fb] Running
	I0108 20:29:34.586918  103895 system_pods.go:61] "kube-controller-manager-multinode-209824" [9e898128-2e03-41f5-8afc-23b34ee9e755] Running
	I0108 20:29:34.586925  103895 system_pods.go:61] "kube-proxy-s267w" [825c87c7-7b31-44a0-9009-1603f045b6a8] Running
	I0108 20:29:34.586929  103895 system_pods.go:61] "kube-scheduler-multinode-209824" [dfd223e6-f902-4432-bdd8-b39f4c0d276f] Running
	I0108 20:29:34.586933  103895 system_pods.go:61] "storage-provisioner" [64668c85-5cc7-4433-afea-3398724f09d1] Running
	I0108 20:29:34.586938  103895 system_pods.go:74] duration metric: took 182.745835ms to wait for pod list to return data ...
	I0108 20:29:34.586951  103895 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:29:34.780499  103895 request.go:629] Waited for 193.413745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:29:34.780614  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:29:34.780621  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:34.780630  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:34.780664  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:34.783801  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:34.783822  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:34.783829  103895 round_trippers.go:580]     Content-Length: 261
	I0108 20:29:34.783834  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:34 GMT
	I0108 20:29:34.783840  103895 round_trippers.go:580]     Audit-Id: 343f7452-d945-42a9-994c-eab995539b84
	I0108 20:29:34.783845  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:34.783850  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:34.783855  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:34.783860  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:34.783899  103895 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"458"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e3176cae-bb5a-461f-8bed-e75c2d3906b3","resourceVersion":"334","creationTimestamp":"2024-01-08T20:29:00Z"}}]}
	I0108 20:29:34.784121  103895 default_sa.go:45] found service account: "default"
	I0108 20:29:34.784140  103895 default_sa.go:55] duration metric: took 197.184601ms for default service account to be created ...
	I0108 20:29:34.784149  103895 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:29:34.980678  103895 request.go:629] Waited for 196.444875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:29:34.980790  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:29:34.980797  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:34.980808  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:34.980829  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:34.984395  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:34.984419  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:34.984427  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:34.984433  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:34.984439  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:34.984446  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:34.984460  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:34 GMT
	I0108 20:29:34.984469  103895 round_trippers.go:580]     Audit-Id: e953995c-d56a-4573-927f-0f02bb4935eb
	I0108 20:29:34.984841  103895 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"458"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"453","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0108 20:29:34.986518  103895 system_pods.go:86] 8 kube-system pods found
	I0108 20:29:34.986537  103895 system_pods.go:89] "coredns-5dd5756b68-ds62v" [926641e0-32b5-4e31-8361-c677061ec067] Running
	I0108 20:29:34.986544  103895 system_pods.go:89] "etcd-multinode-209824" [2ba4f928-8212-4851-a0d0-ecb6766b0d38] Running
	I0108 20:29:34.986549  103895 system_pods.go:89] "kindnet-k59d5" [cc861346-590e-440e-b826-f9a35f006571] Running
	I0108 20:29:34.986553  103895 system_pods.go:89] "kube-apiserver-multinode-209824" [aceea26b-3461-4240-979e-c8aa9f77e8fb] Running
	I0108 20:29:34.986559  103895 system_pods.go:89] "kube-controller-manager-multinode-209824" [9e898128-2e03-41f5-8afc-23b34ee9e755] Running
	I0108 20:29:34.986563  103895 system_pods.go:89] "kube-proxy-s267w" [825c87c7-7b31-44a0-9009-1603f045b6a8] Running
	I0108 20:29:34.986570  103895 system_pods.go:89] "kube-scheduler-multinode-209824" [dfd223e6-f902-4432-bdd8-b39f4c0d276f] Running
	I0108 20:29:34.986574  103895 system_pods.go:89] "storage-provisioner" [64668c85-5cc7-4433-afea-3398724f09d1] Running
	I0108 20:29:34.986581  103895 system_pods.go:126] duration metric: took 202.428029ms to wait for k8s-apps to be running ...
	I0108 20:29:34.986590  103895 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:29:34.986638  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:29:34.999116  103895 system_svc.go:56] duration metric: took 12.507764ms WaitForService to wait for kubelet.
	I0108 20:29:34.999158  103895 kubeadm.go:581] duration metric: took 33.885986928s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:29:34.999207  103895 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:29:35.180571  103895 request.go:629] Waited for 181.279093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 20:29:35.180663  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 20:29:35.180671  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:35.180682  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:35.180695  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:35.184589  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:35.184704  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:35.184728  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:35.184743  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:35.184753  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:35 GMT
	I0108 20:29:35.184767  103895 round_trippers.go:580]     Audit-Id: b389c9d0-6a30-4b49-b5c4-b371170c2b01
	I0108 20:29:35.184779  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:35.184790  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:35.184961  103895 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"458"},"items":[{"metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0108 20:29:35.185421  103895 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 20:29:35.185449  103895 node_conditions.go:123] node cpu capacity is 8
	I0108 20:29:35.185465  103895 node_conditions.go:105] duration metric: took 186.254238ms to run NodePressure ...
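The NodePressure verification above lists the nodes, reports their capacity, and confirms no pressure condition is True. A compact client-go sketch of the same idea:

package kubeutil

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodePressure prints node capacity and fails on memory/disk pressure.
func verifyNodePressure(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, cond := range n.Status.Conditions {
			if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
				cond.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
			}
		}
	}
	return nil
}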
	I0108 20:29:35.185483  103895 start.go:228] waiting for startup goroutines ...
	I0108 20:29:35.185493  103895 start.go:233] waiting for cluster config update ...
	I0108 20:29:35.185506  103895 start.go:242] writing updated cluster config ...
	I0108 20:29:35.188216  103895 out.go:177] 
	I0108 20:29:35.190547  103895 config.go:182] Loaded profile config "multinode-209824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:29:35.190692  103895 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/config.json ...
	I0108 20:29:35.192851  103895 out.go:177] * Starting worker node multinode-209824-m02 in cluster multinode-209824
	I0108 20:29:35.194914  103895 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:29:35.196504  103895 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:29:35.198058  103895 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:29:35.198098  103895 cache.go:56] Caching tarball of preloaded images
	I0108 20:29:35.198099  103895 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:29:35.198236  103895 preload.go:174] Found /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:29:35.198256  103895 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:29:35.198392  103895 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/config.json ...
	I0108 20:29:35.217745  103895 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:29:35.217784  103895 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0108 20:29:35.217819  103895 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:29:35.217876  103895 start.go:365] acquiring machines lock for multinode-209824-m02: {Name:mk97e78f75943c5c0e1adc98795d096f4ca76831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:29:35.218026  103895 start.go:369] acquired machines lock for "multinode-209824-m02" in 120.42µs
	I0108 20:29:35.218062  103895 start.go:93] Provisioning new machine with config: &{Name:multinode-209824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-209824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:29:35.218139  103895 start.go:125] createHost starting for "m02" (driver="docker")
	I0108 20:29:35.220862  103895 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 20:29:35.220986  103895 start.go:159] libmachine.API.Create for "multinode-209824" (driver="docker")
	I0108 20:29:35.221009  103895 client.go:168] LocalClient.Create starting
	I0108 20:29:35.221111  103895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem
	I0108 20:29:35.221150  103895 main.go:141] libmachine: Decoding PEM data...
	I0108 20:29:35.221165  103895 main.go:141] libmachine: Parsing certificate...
	I0108 20:29:35.221217  103895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem
	I0108 20:29:35.221236  103895 main.go:141] libmachine: Decoding PEM data...
	I0108 20:29:35.221247  103895 main.go:141] libmachine: Parsing certificate...
	I0108 20:29:35.221548  103895 cli_runner.go:164] Run: docker network inspect multinode-209824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:29:35.242310  103895 network_create.go:77] Found existing network {name:multinode-209824 subnet:0xc002f31890 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0108 20:29:35.242365  103895 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-209824-m02" container
	I0108 20:29:35.242445  103895 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 20:29:35.260076  103895 cli_runner.go:164] Run: docker volume create multinode-209824-m02 --label name.minikube.sigs.k8s.io=multinode-209824-m02 --label created_by.minikube.sigs.k8s.io=true
	I0108 20:29:35.280187  103895 oci.go:103] Successfully created a docker volume multinode-209824-m02
	I0108 20:29:35.280269  103895 cli_runner.go:164] Run: docker run --rm --name multinode-209824-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-209824-m02 --entrypoint /usr/bin/test -v multinode-209824-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0108 20:29:35.848539  103895 oci.go:107] Successfully prepared a docker volume multinode-209824-m02
	I0108 20:29:35.848599  103895 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:29:35.848630  103895 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 20:29:35.848724  103895 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-209824-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 20:29:41.303703  103895 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-209824-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.454933091s)
	I0108 20:29:41.303735  103895 kic.go:203] duration metric: took 5.455104 seconds to extract preloaded images to volume
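The preload extraction above is ordinary docker plumbing: mount the lz4 tarball read-only, mount the node's named volume at /extractDir, and let tar unpack one into the other. A sketch of issuing the same command from Go; the image and paths come straight from the log lines:

package kubeutil

import (
	"fmt"
	"os/exec"
)

// extractPreload untars a preloaded-images tarball into a docker volume,
// mirroring the `docker run --entrypoint /usr/bin/tar ...` call above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}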
	W0108 20:29:41.303873  103895 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 20:29:41.303991  103895 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 20:29:41.362303  103895 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-209824-m02 --name multinode-209824-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-209824-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-209824-m02 --network multinode-209824 --ip 192.168.58.3 --volume multinode-209824-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:29:41.709469  103895 cli_runner.go:164] Run: docker container inspect multinode-209824-m02 --format={{.State.Running}}
	I0108 20:29:41.728684  103895 cli_runner.go:164] Run: docker container inspect multinode-209824-m02 --format={{.State.Status}}
	I0108 20:29:41.753660  103895 cli_runner.go:164] Run: docker exec multinode-209824-m02 stat /var/lib/dpkg/alternatives/iptables
	I0108 20:29:41.828143  103895 oci.go:144] the created container "multinode-209824-m02" has a running status.
	I0108 20:29:41.828175  103895 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa...
	I0108 20:29:42.005732  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 20:29:42.005796  103895 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 20:29:42.034768  103895 cli_runner.go:164] Run: docker container inspect multinode-209824-m02 --format={{.State.Status}}
	I0108 20:29:42.058442  103895 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 20:29:42.058472  103895 kic_runner.go:114] Args: [docker exec --privileged multinode-209824-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 20:29:42.125823  103895 cli_runner.go:164] Run: docker container inspect multinode-209824-m02 --format={{.State.Status}}
	I0108 20:29:42.147767  103895 machine.go:88] provisioning docker machine ...
	I0108 20:29:42.147819  103895 ubuntu.go:169] provisioning hostname "multinode-209824-m02"
	I0108 20:29:42.147898  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:42.171530  103895 main.go:141] libmachine: Using SSH client type: native
	I0108 20:29:42.171866  103895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0108 20:29:42.171877  103895 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-209824-m02 && echo "multinode-209824-m02" | sudo tee /etc/hostname
	I0108 20:29:42.172756  103895 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40080->127.0.0.1:32852: read: connection reset by peer
	I0108 20:29:45.312440  103895 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-209824-m02
	
	I0108 20:29:45.312557  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:45.332754  103895 main.go:141] libmachine: Using SSH client type: native
	I0108 20:29:45.333076  103895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0108 20:29:45.333095  103895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-209824-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-209824-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-209824-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:29:45.456314  103895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
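Provisioning commands like the hostname setup above run over SSH to the container's forwarded port (127.0.0.1:32852 in this run). A minimal sketch with golang.org/x/crypto/ssh; ignoring the host key is tolerable here only because the target is a local container:

package kubeutil

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// sshRun executes one command on addr using key-based auth and returns its
// combined stdout/stderr.
func sshRun(addr, user, keyPath, command string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.CombinedOutput(command)
}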
	I0108 20:29:45.456356  103895 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-11003/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-11003/.minikube}
	I0108 20:29:45.456376  103895 ubuntu.go:177] setting up certificates
	I0108 20:29:45.456388  103895 provision.go:83] configureAuth start
	I0108 20:29:45.456456  103895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824-m02
	I0108 20:29:45.475624  103895 provision.go:138] copyHostCerts
	I0108 20:29:45.475678  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:29:45.475726  103895 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem, removing ...
	I0108 20:29:45.475737  103895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:29:45.475834  103895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem (1078 bytes)
	I0108 20:29:45.475920  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:29:45.475940  103895 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem, removing ...
	I0108 20:29:45.475948  103895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:29:45.475979  103895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem (1123 bytes)
	I0108 20:29:45.476069  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:29:45.476094  103895 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem, removing ...
	I0108 20:29:45.476098  103895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:29:45.476121  103895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem (1679 bytes)
	I0108 20:29:45.476177  103895 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem org=jenkins.multinode-209824-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-209824-m02]
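The san=[...] list logged above gets split between the certificate's IP and DNS subject alternative names before signing. A sketch of building that x509 template; actually signing it against the CA key is omitted for brevity, and the 26280h lifetime simply mirrors CertExpiration in the config dump:

package kubeutil

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate sorts each SAN into IPAddresses or DNSNames.
func serverCertTemplate(cn string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	return tmpl
}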
	I0108 20:29:45.671532  103895 provision.go:172] copyRemoteCerts
	I0108 20:29:45.671666  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:29:45.671723  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:45.691958  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa Username:docker}
	I0108 20:29:45.785879  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:29:45.785956  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:29:45.814168  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:29:45.814236  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 20:29:45.839693  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:29:45.839773  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:29:45.866594  103895 provision.go:86] duration metric: configureAuth took 410.186552ms
	I0108 20:29:45.866629  103895 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:29:45.866893  103895 config.go:182] Loaded profile config "multinode-209824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:29:45.867086  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:45.886188  103895 main.go:141] libmachine: Using SSH client type: native
	I0108 20:29:45.886581  103895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0108 20:29:45.886602  103895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:29:46.113491  103895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:29:46.113517  103895 machine.go:91] provisioned docker machine in 3.965721233s
	I0108 20:29:46.113528  103895 client.go:171] LocalClient.Create took 10.892509404s
	I0108 20:29:46.113553  103895 start.go:167] duration metric: libmachine.API.Create for "multinode-209824" took 10.892567441s
	I0108 20:29:46.113562  103895 start.go:300] post-start starting for "multinode-209824-m02" (driver="docker")
	I0108 20:29:46.113571  103895 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:29:46.113618  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:29:46.113667  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:46.132332  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa Username:docker}
	I0108 20:29:46.226317  103895 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:29:46.229850  103895 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 20:29:46.229883  103895 command_runner.go:130] > NAME="Ubuntu"
	I0108 20:29:46.229893  103895 command_runner.go:130] > VERSION_ID="22.04"
	I0108 20:29:46.229900  103895 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 20:29:46.229915  103895 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 20:29:46.229919  103895 command_runner.go:130] > ID=ubuntu
	I0108 20:29:46.229923  103895 command_runner.go:130] > ID_LIKE=debian
	I0108 20:29:46.229928  103895 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 20:29:46.229941  103895 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 20:29:46.229953  103895 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 20:29:46.229972  103895 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 20:29:46.229979  103895 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 20:29:46.230052  103895 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:29:46.230080  103895 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:29:46.230091  103895 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:29:46.230101  103895 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 20:29:46.230119  103895 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/addons for local assets ...
	I0108 20:29:46.230203  103895 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/files for local assets ...
	I0108 20:29:46.230293  103895 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> 177612.pem in /etc/ssl/certs
	I0108 20:29:46.230303  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> /etc/ssl/certs/177612.pem
	I0108 20:29:46.230446  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:29:46.239815  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:29:46.264646  103895 start.go:303] post-start completed in 151.066043ms
	I0108 20:29:46.264976  103895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824-m02
	I0108 20:29:46.282914  103895 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/config.json ...
	I0108 20:29:46.283196  103895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:29:46.283262  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:46.307172  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa Username:docker}
	I0108 20:29:46.400895  103895 command_runner.go:130] > 20%!
	(MISSING)I0108 20:29:46.400993  103895 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:29:46.405839  103895 command_runner.go:130] > 233G
	I0108 20:29:46.406014  103895 start.go:128] duration metric: createHost completed in 11.18786121s
	I0108 20:29:46.406035  103895 start.go:83] releasing machines lock for "multinode-209824-m02", held for 11.187994405s
	I0108 20:29:46.406094  103895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824-m02
	I0108 20:29:46.427219  103895 out.go:177] * Found network options:
	I0108 20:29:46.429194  103895 out.go:177]   - NO_PROXY=192.168.58.2
	W0108 20:29:46.430642  103895 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 20:29:46.430683  103895 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:29:46.430788  103895 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:29:46.430850  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:46.430889  103895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:29:46.430990  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:29:46.452885  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa Username:docker}
	I0108 20:29:46.452878  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa Username:docker}
	I0108 20:29:46.678202  103895 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:29:46.678230  103895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:29:46.683117  103895 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 20:29:46.683154  103895 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 20:29:46.683165  103895 command_runner.go:130] > Device: b0h/176d	Inode: 570039      Links: 1
	I0108 20:29:46.683173  103895 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:29:46.683184  103895 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:29:46.683192  103895 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 20:29:46.683199  103895 command_runner.go:130] > Change: 2024-01-08 20:09:52.936393328 +0000
	I0108 20:29:46.683207  103895 command_runner.go:130] >  Birth: 2024-01-08 20:09:52.936393328 +0000
	I0108 20:29:46.683295  103895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:29:46.704039  103895 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:29:46.704154  103895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:29:46.736614  103895 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 20:29:46.736657  103895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
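
The two find/mv passes above disable competing CNI configs by renaming them with a .mk_disabled suffix: first any *loopback.conf*, then the bridge and podman conflists, so only the CNI minikube selects stays active. A rough Go equivalent of that rename pass (the glob patterns come from the log; everything else is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames matching files under dir to <name>.mk_disabled,
// approximating the find/mv commands in the log. Error handling is
// simplified for illustration.
func disableCNIConfs(dir string, patterns []string) error {
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, p))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled, like find's -not -name *.mk_disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	_ = disableCNIConfs("/etc/cni/net.d",
		[]string{"*loopback.conf*", "*bridge*", "*podman*"})
}
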
	I0108 20:29:46.736666  103895 start.go:475] detecting cgroup driver to use...
	I0108 20:29:46.736703  103895 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:29:46.736757  103895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:29:46.753037  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:29:46.765979  103895 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:29:46.766035  103895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:29:46.779027  103895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:29:46.794409  103895 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:29:46.879065  103895 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:29:46.970727  103895 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 20:29:46.970776  103895 docker.go:233] disabling docker service ...
	I0108 20:29:46.970834  103895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:29:46.989515  103895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:29:47.001457  103895 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:29:47.089339  103895 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 20:29:47.089410  103895 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:29:47.102761  103895 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 20:29:47.179425  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:29:47.191428  103895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:29:47.208158  103895 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 20:29:47.209156  103895 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:29:47.209210  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:29:47.219386  103895 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:29:47.219462  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:29:47.230566  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:29:47.240326  103895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:29:47.250268  103895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:29:47.259806  103895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:29:47.267258  103895 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 20:29:47.268267  103895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:29:47.276629  103895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:29:47.357257  103895 ssh_runner.go:195] Run: sudo systemctl restart crio
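
The sed edits just before this restart rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is set to cgroupfs, and conmon_cgroup is re-added as "pod". A hedged Go sketch of the same key = value rewrite (setCrioOption is a hypothetical helper, not minikube's actual API):

package main

import (
	"os"
	"regexp"
)

// setCrioOption rewrites any `<key> = ...` line in a CRI-O drop-in config,
// the same effect as the sed substitutions in the log.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	_ = setCrioOption(conf, "cgroup_manager", "cgroupfs")
}
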
	I0108 20:29:47.483426  103895 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:29:47.483502  103895 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:29:47.487463  103895 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:29:47.487495  103895 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:29:47.487506  103895 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I0108 20:29:47.487517  103895 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:29:47.487526  103895 command_runner.go:130] > Access: 2024-01-08 20:29:47.470099957 +0000
	I0108 20:29:47.487536  103895 command_runner.go:130] > Modify: 2024-01-08 20:29:47.470099957 +0000
	I0108 20:29:47.487548  103895 command_runner.go:130] > Change: 2024-01-08 20:29:47.470099957 +0000
	I0108 20:29:47.487557  103895 command_runner.go:130] >  Birth: -
	I0108 20:29:47.487579  103895 start.go:543] Will wait 60s for crictl version
	I0108 20:29:47.487621  103895 ssh_runner.go:195] Run: which crictl
	I0108 20:29:47.490629  103895 command_runner.go:130] > /usr/bin/crictl
	I0108 20:29:47.490847  103895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:29:47.529276  103895 command_runner.go:130] > Version:  0.1.0
	I0108 20:29:47.529297  103895 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:29:47.529302  103895 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 20:29:47.529307  103895 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:29:47.529323  103895 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 20:29:47.529397  103895 ssh_runner.go:195] Run: crio --version
	I0108 20:29:47.566107  103895 command_runner.go:130] > crio version 1.24.6
	I0108 20:29:47.566127  103895 command_runner.go:130] > Version:          1.24.6
	I0108 20:29:47.566133  103895 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:29:47.566138  103895 command_runner.go:130] > GitTreeState:     clean
	I0108 20:29:47.566144  103895 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:29:47.566148  103895 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:29:47.566152  103895 command_runner.go:130] > Compiler:         gc
	I0108 20:29:47.566156  103895 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:29:47.566164  103895 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:29:47.566172  103895 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:29:47.566177  103895 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:29:47.566181  103895 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:29:47.566253  103895 ssh_runner.go:195] Run: crio --version
	I0108 20:29:47.603984  103895 command_runner.go:130] > crio version 1.24.6
	I0108 20:29:47.604011  103895 command_runner.go:130] > Version:          1.24.6
	I0108 20:29:47.604022  103895 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 20:29:47.604028  103895 command_runner.go:130] > GitTreeState:     clean
	I0108 20:29:47.604039  103895 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 20:29:47.604045  103895 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 20:29:47.604058  103895 command_runner.go:130] > Compiler:         gc
	I0108 20:29:47.604065  103895 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:29:47.604074  103895 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:29:47.604089  103895 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:29:47.604098  103895 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:29:47.604107  103895 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:29:47.606745  103895 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 20:29:47.608635  103895 out.go:177]   - env NO_PROXY=192.168.58.2
	I0108 20:29:47.610283  103895 cli_runner.go:164] Run: docker network inspect multinode-209824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 20:29:47.628237  103895 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 20:29:47.633666  103895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
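
The bash pipeline above updates /etc/hosts idempotently: grep -v drops any stale host.minikube.internal line, a fresh "192.168.58.1<TAB>host.minikube.internal" mapping is appended, and the result is copied back over /etc/hosts via a temp file. A small Go sketch of the same drop-and-append logic (ensureHostsEntry is a hypothetical name):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends a fresh
// "ip<TAB>host" mapping, mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping; replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.58.1", "host.minikube.internal")
}
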
	I0108 20:29:47.647348  103895 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824 for IP: 192.168.58.3
	I0108 20:29:47.647438  103895 certs.go:190] acquiring lock for shared ca certs: {Name:mk77871b3b3f5891ac4ba9a63281bc46e0e62e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:29:47.647664  103895 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key
	I0108 20:29:47.647727  103895 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key
	I0108 20:29:47.647745  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:29:47.647767  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:29:47.647787  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:29:47.647805  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:29:47.647877  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem (1338 bytes)
	W0108 20:29:47.647926  103895 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761_empty.pem, impossibly tiny 0 bytes
	I0108 20:29:47.647951  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:29:47.648006  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem (1078 bytes)
	I0108 20:29:47.648047  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:29:47.648108  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem (1679 bytes)
	I0108 20:29:47.648178  103895 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:29:47.648223  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:29:47.648245  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem -> /usr/share/ca-certificates/17761.pem
	I0108 20:29:47.648265  103895 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> /usr/share/ca-certificates/177612.pem
	I0108 20:29:47.648626  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:29:47.675217  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:29:47.700996  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:29:47.728177  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:29:47.754226  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:29:47.778979  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/17761.pem --> /usr/share/ca-certificates/17761.pem (1338 bytes)
	I0108 20:29:47.804375  103895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /usr/share/ca-certificates/177612.pem (1708 bytes)
	I0108 20:29:47.829667  103895 ssh_runner.go:195] Run: openssl version
	I0108 20:29:47.835894  103895 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 20:29:47.836049  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17761.pem && ln -fs /usr/share/ca-certificates/17761.pem /etc/ssl/certs/17761.pem"
	I0108 20:29:47.847384  103895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17761.pem
	I0108 20:29:47.851306  103895 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/17761.pem
	I0108 20:29:47.851341  103895 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:16 /usr/share/ca-certificates/17761.pem
	I0108 20:29:47.851409  103895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17761.pem
	I0108 20:29:47.858070  103895 command_runner.go:130] > 51391683
	I0108 20:29:47.858325  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17761.pem /etc/ssl/certs/51391683.0"
	I0108 20:29:47.867972  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177612.pem && ln -fs /usr/share/ca-certificates/177612.pem /etc/ssl/certs/177612.pem"
	I0108 20:29:47.877833  103895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177612.pem
	I0108 20:29:47.881984  103895 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/177612.pem
	I0108 20:29:47.882072  103895 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:16 /usr/share/ca-certificates/177612.pem
	I0108 20:29:47.882141  103895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177612.pem
	I0108 20:29:47.889448  103895 command_runner.go:130] > 3ec20f2e
	I0108 20:29:47.889725  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177612.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:29:47.900734  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:29:47.910700  103895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:29:47.914184  103895 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:29:47.914274  103895 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:29:47.914346  103895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:29:47.921557  103895 command_runner.go:130] > b5213941
	I0108 20:29:47.921674  103895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
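
Each CA placed under /usr/share/ca-certificates is exposed to OpenSSL through a <subject-hash>.0 symlink in /etc/ssl/certs; the hashes (51391683, 3ec20f2e, b5213941 above) come from openssl x509 -hash -noout. A minimal Go sketch of that hash-and-link step, shelling out to openssl just as the log does (linkCertByHash is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash creates the /etc/ssl/certs/<hash>.0 symlink OpenSSL uses
// for CA lookup, as the openssl x509 -hash / ln -fs steps in the log do.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	_ = linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem")
}
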
	I0108 20:29:47.931873  103895 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:29:47.935534  103895 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:29:47.935600  103895 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:29:47.935702  103895 ssh_runner.go:195] Run: crio config
	I0108 20:29:47.978040  103895 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:29:47.978073  103895 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:29:47.978083  103895 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:29:47.978088  103895 command_runner.go:130] > #
	I0108 20:29:47.978101  103895 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:29:47.978111  103895 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:29:47.978120  103895 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:29:47.978132  103895 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:29:47.978144  103895 command_runner.go:130] > # reload'.
	I0108 20:29:47.978153  103895 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:29:47.978168  103895 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:29:47.978179  103895 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:29:47.978189  103895 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:29:47.978193  103895 command_runner.go:130] > [crio]
	I0108 20:29:47.978201  103895 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:29:47.978208  103895 command_runner.go:130] > # container images, in this directory.
	I0108 20:29:47.978226  103895 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 20:29:47.978238  103895 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:29:47.978256  103895 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 20:29:47.978270  103895 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:29:47.978282  103895 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:29:47.978289  103895 command_runner.go:130] > # storage_driver = "vfs"
	I0108 20:29:47.978295  103895 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:29:47.978308  103895 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:29:47.978317  103895 command_runner.go:130] > # storage_option = [
	I0108 20:29:47.978323  103895 command_runner.go:130] > # ]
	I0108 20:29:47.978334  103895 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:29:47.978346  103895 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:29:47.978357  103895 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:29:47.978370  103895 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:29:47.978383  103895 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:29:47.978394  103895 command_runner.go:130] > # always happen on a node reboot
	I0108 20:29:47.978405  103895 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:29:47.978416  103895 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:29:47.978425  103895 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:29:47.978442  103895 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:29:47.978455  103895 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:29:47.978468  103895 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:29:47.978480  103895 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:29:47.978507  103895 command_runner.go:130] > # internal_wipe = true
	I0108 20:29:47.978518  103895 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:29:47.978528  103895 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:29:47.978538  103895 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:29:47.978551  103895 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:29:47.978561  103895 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:29:47.978571  103895 command_runner.go:130] > [crio.api]
	I0108 20:29:47.978580  103895 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:29:47.978594  103895 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:29:47.978605  103895 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:29:47.978616  103895 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:29:47.978629  103895 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:29:47.978637  103895 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:29:47.978646  103895 command_runner.go:130] > # stream_port = "0"
	I0108 20:29:47.978659  103895 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:29:47.978670  103895 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:29:47.978686  103895 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:29:47.978696  103895 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:29:47.978706  103895 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:29:47.978716  103895 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:29:47.978723  103895 command_runner.go:130] > # minutes.
	I0108 20:29:47.978730  103895 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:29:47.978741  103895 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:29:47.978755  103895 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:29:47.978763  103895 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:29:47.978777  103895 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:29:47.978825  103895 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:29:47.978844  103895 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:29:47.978852  103895 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:29:47.978868  103895 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:29:47.978879  103895 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 20:29:47.978893  103895 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:29:47.978903  103895 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 20:29:47.978933  103895 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:29:47.978946  103895 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:29:47.978953  103895 command_runner.go:130] > [crio.runtime]
	I0108 20:29:47.978962  103895 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:29:47.978976  103895 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:29:47.978985  103895 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:29:47.978996  103895 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:29:47.979006  103895 command_runner.go:130] > # default_ulimits = [
	I0108 20:29:47.979013  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979023  103895 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:29:47.979033  103895 command_runner.go:130] > # no_pivot = false
	I0108 20:29:47.979043  103895 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:29:47.979056  103895 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:29:47.979065  103895 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:29:47.979083  103895 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:29:47.979094  103895 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:29:47.979107  103895 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:29:47.979116  103895 command_runner.go:130] > # conmon = ""
	I0108 20:29:47.979124  103895 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:29:47.979137  103895 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:29:47.979144  103895 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:29:47.979159  103895 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:29:47.979169  103895 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:29:47.979183  103895 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:29:47.979194  103895 command_runner.go:130] > # conmon_env = [
	I0108 20:29:47.979200  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979211  103895 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:29:47.979220  103895 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:29:47.979230  103895 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:29:47.979238  103895 command_runner.go:130] > # default_env = [
	I0108 20:29:47.979247  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979258  103895 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:29:47.979267  103895 command_runner.go:130] > # selinux = false
	I0108 20:29:47.979278  103895 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:29:47.979290  103895 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:29:47.979304  103895 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:29:47.979320  103895 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:29:47.979330  103895 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:29:47.979338  103895 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:29:47.979351  103895 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:29:47.979378  103895 command_runner.go:130] > # which might increase security.
	I0108 20:29:47.979386  103895 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 20:29:47.979399  103895 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:29:47.979408  103895 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:29:47.979422  103895 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:29:47.979430  103895 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:29:47.979436  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:29:47.979442  103895 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:29:47.979449  103895 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:29:47.979456  103895 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:29:47.979461  103895 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:29:47.979469  103895 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:29:47.979474  103895 command_runner.go:130] > # irqbalance daemon.
	I0108 20:29:47.979487  103895 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:29:47.979497  103895 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:29:47.979506  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:29:47.979511  103895 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:29:47.979520  103895 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:29:47.979536  103895 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:29:47.979546  103895 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:29:47.979557  103895 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:29:47.979567  103895 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:29:47.979581  103895 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:29:47.979589  103895 command_runner.go:130] > # will be added.
	I0108 20:29:47.979599  103895 command_runner.go:130] > # default_capabilities = [
	I0108 20:29:47.979606  103895 command_runner.go:130] > # 	"CHOWN",
	I0108 20:29:47.979623  103895 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:29:47.979629  103895 command_runner.go:130] > # 	"FSETID",
	I0108 20:29:47.979634  103895 command_runner.go:130] > # 	"FOWNER",
	I0108 20:29:47.979640  103895 command_runner.go:130] > # 	"SETGID",
	I0108 20:29:47.979644  103895 command_runner.go:130] > # 	"SETUID",
	I0108 20:29:47.979650  103895 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:29:47.979655  103895 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:29:47.979661  103895 command_runner.go:130] > # 	"KILL",
	I0108 20:29:47.979665  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979675  103895 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 20:29:47.979686  103895 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 20:29:47.979693  103895 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 20:29:47.979700  103895 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:29:47.979709  103895 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:29:47.979714  103895 command_runner.go:130] > # default_sysctls = [
	I0108 20:29:47.979717  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979724  103895 command_runner.go:130] > # List of devices on the host that a
	I0108 20:29:47.979733  103895 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:29:47.979738  103895 command_runner.go:130] > # allowed_devices = [
	I0108 20:29:47.979745  103895 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:29:47.979748  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979756  103895 command_runner.go:130] > # List of additional devices. specified as
	I0108 20:29:47.979779  103895 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:29:47.979788  103895 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:29:47.979794  103895 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:29:47.979801  103895 command_runner.go:130] > # additional_devices = [
	I0108 20:29:47.979805  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979815  103895 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:29:47.979822  103895 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:29:47.979827  103895 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:29:47.979833  103895 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:29:47.979837  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979846  103895 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:29:47.979854  103895 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:29:47.979861  103895 command_runner.go:130] > # Defaults to false.
	I0108 20:29:47.979866  103895 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:29:47.979875  103895 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:29:47.979881  103895 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:29:47.979887  103895 command_runner.go:130] > # hooks_dir = [
	I0108 20:29:47.979892  103895 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:29:47.979898  103895 command_runner.go:130] > # ]
	I0108 20:29:47.979905  103895 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:29:47.979913  103895 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:29:47.979921  103895 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:29:47.979924  103895 command_runner.go:130] > #
	I0108 20:29:47.979932  103895 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:29:47.979941  103895 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:29:47.979949  103895 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:29:47.979953  103895 command_runner.go:130] > #
	I0108 20:29:47.979960  103895 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:29:47.979968  103895 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:29:47.979977  103895 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:29:47.979985  103895 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:29:47.979989  103895 command_runner.go:130] > #
	I0108 20:29:47.979995  103895 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:29:47.980001  103895 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:29:47.980010  103895 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:29:47.980016  103895 command_runner.go:130] > # pids_limit = 0
	I0108 20:29:47.980023  103895 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:29:47.980031  103895 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:29:47.980039  103895 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:29:47.980049  103895 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:29:47.980055  103895 command_runner.go:130] > # log_size_max = -1
	I0108 20:29:47.980062  103895 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 20:29:47.980069  103895 command_runner.go:130] > # log_to_journald = false
	I0108 20:29:47.980075  103895 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:29:47.980083  103895 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:29:47.980088  103895 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:29:47.980095  103895 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:29:47.980100  103895 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:29:47.980107  103895 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:29:47.980112  103895 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:29:47.980119  103895 command_runner.go:130] > # read_only = false
	I0108 20:29:47.980125  103895 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:29:47.980133  103895 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:29:47.980140  103895 command_runner.go:130] > # live configuration reload.
	I0108 20:29:47.980144  103895 command_runner.go:130] > # log_level = "info"
	I0108 20:29:47.980152  103895 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:29:47.980160  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:29:47.980166  103895 command_runner.go:130] > # log_filter = ""
	I0108 20:29:47.980173  103895 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:29:47.980181  103895 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:29:47.980188  103895 command_runner.go:130] > # separated by comma.
	I0108 20:29:47.980192  103895 command_runner.go:130] > # uid_mappings = ""
	I0108 20:29:47.980198  103895 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:29:47.980206  103895 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:29:47.980212  103895 command_runner.go:130] > # separated by comma.
	I0108 20:29:47.980216  103895 command_runner.go:130] > # gid_mappings = ""
	I0108 20:29:47.980224  103895 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:29:47.980233  103895 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:29:47.980241  103895 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:29:47.980248  103895 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:29:47.980254  103895 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:29:47.980262  103895 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:29:47.980270  103895 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:29:47.980277  103895 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:29:47.980283  103895 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:29:47.980291  103895 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:29:47.980299  103895 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 20:29:47.980305  103895 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:29:47.980311  103895 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:29:47.980322  103895 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:29:47.980329  103895 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:29:47.980336  103895 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:29:47.980341  103895 command_runner.go:130] > # drop_infra_ctr = true
	I0108 20:29:47.980349  103895 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:29:47.980355  103895 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:29:47.980365  103895 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:29:47.980371  103895 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:29:47.980377  103895 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:29:47.980384  103895 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:29:47.980391  103895 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:29:47.980400  103895 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:29:47.980406  103895 command_runner.go:130] > # pinns_path = ""
	I0108 20:29:47.980413  103895 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:29:47.980421  103895 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:29:47.980430  103895 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:29:47.980436  103895 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:29:47.980442  103895 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:29:47.980452  103895 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 20:29:47.980464  103895 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:29:47.980472  103895 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:29:47.980487  103895 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:29:47.980495  103895 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:29:47.980501  103895 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:29:47.980507  103895 command_runner.go:130] > # ]
	I0108 20:29:47.980514  103895 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:29:47.980521  103895 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:29:47.980531  103895 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:29:47.980539  103895 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:29:47.980543  103895 command_runner.go:130] > #
	I0108 20:29:47.980550  103895 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:29:47.980556  103895 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:29:47.980562  103895 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:29:47.980567  103895 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:29:47.980574  103895 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:29:47.980579  103895 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:29:47.980587  103895 command_runner.go:130] > # Where:
	I0108 20:29:47.980596  103895 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:29:47.980606  103895 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:29:47.980619  103895 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:29:47.980631  103895 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:29:47.980639  103895 command_runner.go:130] > #   in $PATH.
	I0108 20:29:47.980651  103895 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:29:47.980661  103895 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:29:47.980673  103895 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:29:47.980682  103895 command_runner.go:130] > #   state.
	I0108 20:29:47.980694  103895 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:29:47.980703  103895 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 20:29:47.980709  103895 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:29:47.980717  103895 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:29:47.980726  103895 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:29:47.980735  103895 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:29:47.980743  103895 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:29:47.980763  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:29:47.980773  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:29:47.980782  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:29:47.980791  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:29:47.980798  103895 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:29:47.980807  103895 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:29:47.980816  103895 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:29:47.980826  103895 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:29:47.980833  103895 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:29:47.980838  103895 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:29:47.980845  103895 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 20:29:47.980850  103895 command_runner.go:130] > runtime_type = "oci"
	I0108 20:29:47.980857  103895 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:29:47.980862  103895 command_runner.go:130] > runtime_config_path = ""
	I0108 20:29:47.980869  103895 command_runner.go:130] > monitor_path = ""
	I0108 20:29:47.980874  103895 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:29:47.980880  103895 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 20:29:47.980908  103895 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:29:47.980919  103895 command_runner.go:130] > # running containers
	I0108 20:29:47.980924  103895 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:29:47.980932  103895 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:29:47.980941  103895 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:29:47.980947  103895 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 20:29:47.980954  103895 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:29:47.980960  103895 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:29:47.980967  103895 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:29:47.980972  103895 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:29:47.980979  103895 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:29:47.980984  103895 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:29:47.980993  103895 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:29:47.980998  103895 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:29:47.981005  103895 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:29:47.981015  103895 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 20:29:47.981025  103895 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:29:47.981034  103895 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:29:47.981046  103895 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:29:47.981056  103895 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:29:47.981064  103895 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:29:47.981072  103895 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:29:47.981078  103895 command_runner.go:130] > # Example:
	I0108 20:29:47.981084  103895 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:29:47.981092  103895 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:29:47.981098  103895 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:29:47.981105  103895 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:29:47.981112  103895 command_runner.go:130] > # cpuset = 0
	I0108 20:29:47.981117  103895 command_runner.go:130] > # cpushares = "0-1"
	I0108 20:29:47.981123  103895 command_runner.go:130] > # Where:
	I0108 20:29:47.981127  103895 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:29:47.981137  103895 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:29:47.981145  103895 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:29:47.981151  103895 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:29:47.981161  103895 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:29:47.981169  103895 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:29:47.981173  103895 command_runner.go:130] > # 
	I0108 20:29:47.981180  103895 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:29:47.981186  103895 command_runner.go:130] > #
	I0108 20:29:47.981193  103895 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:29:47.981201  103895 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:29:47.981209  103895 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:29:47.981216  103895 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:29:47.981224  103895 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
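
For orientation, a sketch of what that system-wide file can look like; the registry address is hypothetical, and the keys follow the containers-registries.conf(5) v2 format:

    # /etc/containers/registries.conf (sketch)
    unqualified-search-registries = ["docker.io"]

    [[registry]]
    prefix   = "192.168.49.2:5000"   # hypothetical local registry
    location = "192.168.49.2:5000"
    insecure = true                  # same effect as listing it under insecure_registries below
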
	I0108 20:29:47.981230  103895 command_runner.go:130] > [crio.image]
	I0108 20:29:47.981237  103895 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:29:47.981243  103895 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:29:47.981249  103895 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:29:47.981258  103895 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:29:47.981264  103895 command_runner.go:130] > # global_auth_file = ""
	I0108 20:29:47.981269  103895 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:29:47.981277  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:29:47.981285  103895 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:29:47.981291  103895 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:29:47.981300  103895 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:29:47.981308  103895 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:29:47.981312  103895 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:29:47.981322  103895 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:29:47.981330  103895 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 20:29:47.981339  103895 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 20:29:47.981347  103895 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:29:47.981353  103895 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:29:47.981361  103895 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:29:47.981370  103895 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:29:47.981379  103895 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:29:47.981387  103895 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:29:47.981394  103895 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:29:47.981401  103895 command_runner.go:130] > # signature_policy = ""
	I0108 20:29:47.981411  103895 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:29:47.981419  103895 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:29:47.981426  103895 command_runner.go:130] > # changing them here.
	I0108 20:29:47.981430  103895 command_runner.go:130] > # insecure_registries = [
	I0108 20:29:47.981436  103895 command_runner.go:130] > # ]
	I0108 20:29:47.981442  103895 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:29:47.981450  103895 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0108 20:29:47.981454  103895 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:29:47.981462  103895 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:29:47.981467  103895 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:29:47.981475  103895 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:29:47.981479  103895 command_runner.go:130] > # CNI plugins.
	I0108 20:29:47.981488  103895 command_runner.go:130] > [crio.network]
	I0108 20:29:47.981494  103895 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:29:47.981502  103895 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0108 20:29:47.981506  103895 command_runner.go:130] > # cni_default_network = ""
	I0108 20:29:47.981515  103895 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:29:47.981522  103895 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:29:47.981527  103895 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:29:47.981532  103895 command_runner.go:130] > # plugin_dirs = [
	I0108 20:29:47.981538  103895 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:29:47.981544  103895 command_runner.go:130] > # ]
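
As an illustration of what CRI-O picks up from network_dir, a hypothetical conflist follows; the network name, plugin choice, and subnet are illustrative only (this run actually uses kindnet, applied further down in the log):

    # /etc/cni/net.d/10-example.conflist (hypothetical)
    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
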
	I0108 20:29:47.981556  103895 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 20:29:47.981572  103895 command_runner.go:130] > [crio.metrics]
	I0108 20:29:47.981582  103895 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:29:47.981588  103895 command_runner.go:130] > # enable_metrics = false
	I0108 20:29:47.981594  103895 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:29:47.981601  103895 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 20:29:47.981608  103895 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:29:47.981617  103895 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:29:47.981623  103895 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:29:47.981630  103895 command_runner.go:130] > # metrics_collectors = [
	I0108 20:29:47.981634  103895 command_runner.go:130] > # 	"operations",
	I0108 20:29:47.981645  103895 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:29:47.981655  103895 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:29:47.981665  103895 command_runner.go:130] > # 	"operations_errors",
	I0108 20:29:47.981673  103895 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:29:47.981683  103895 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:29:47.981693  103895 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:29:47.981700  103895 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:29:47.981705  103895 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:29:47.981712  103895 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:29:47.981717  103895 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:29:47.981724  103895 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:29:47.981728  103895 command_runner.go:130] > # 	"containers_oom",
	I0108 20:29:47.981735  103895 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:29:47.981741  103895 command_runner.go:130] > # 	"operations_total",
	I0108 20:29:47.981752  103895 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:29:47.981764  103895 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:29:47.981772  103895 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:29:47.981784  103895 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:29:47.981795  103895 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:29:47.981805  103895 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:29:47.981813  103895 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:29:47.981818  103895 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:29:47.981825  103895 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:29:47.981828  103895 command_runner.go:130] > # ]
	I0108 20:29:47.981836  103895 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:29:47.981841  103895 command_runner.go:130] > # metrics_port = 9090
	I0108 20:29:47.981852  103895 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:29:47.981863  103895 command_runner.go:130] > # metrics_socket = ""
	I0108 20:29:47.981875  103895 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:29:47.981889  103895 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:29:47.981902  103895 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:29:47.981912  103895 command_runner.go:130] > # certificate on any modification event.
	I0108 20:29:47.981919  103895 command_runner.go:130] > # metrics_cert = ""
	I0108 20:29:47.981925  103895 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:29:47.981933  103895 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:29:47.981944  103895 command_runner.go:130] > # metrics_key = ""
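
To actually turn metrics on, a small drop-in is enough; a sketch, assuming CRI-O's standard /etc/crio/crio.conf.d drop-in directory, with the port matching the commented default above:

    # /etc/crio/crio.conf.d/10-metrics.conf (sketch)
    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090

    # restart and scrape; per the note above, collector "operations" surfaces as "crio_operations"
    sudo systemctl restart crio
    curl -s http://127.0.0.1:9090/metrics | grep -m1 crio_operations
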
	I0108 20:29:47.981958  103895 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:29:47.981968  103895 command_runner.go:130] > [crio.tracing]
	I0108 20:29:47.981981  103895 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:29:47.981991  103895 command_runner.go:130] > # enable_tracing = false
	I0108 20:29:47.982004  103895 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 20:29:47.982014  103895 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:29:47.982024  103895 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:29:47.982033  103895 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
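
Tracing can be enabled the same way, pointed at an OTLP/gRPC collector; the endpoint and sampling values below are illustrative:

    # /etc/crio/crio.conf.d/10-tracing.conf (sketch)
    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"           # OTLP gRPC collector address
    tracing_sampling_rate_per_million = 1000000   # sample every span
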
	I0108 20:29:47.982047  103895 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:29:47.982058  103895 command_runner.go:130] > [crio.stats]
	I0108 20:29:47.982069  103895 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:29:47.982082  103895 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:29:47.982093  103895 command_runner.go:130] > # stats_collection_period = 0
	I0108 20:29:47.982145  103895 command_runner.go:130] ! time="2024-01-08 20:29:47.975621649Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 20:29:47.982173  103895 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 20:29:47.982273  103895 cni.go:84] Creating CNI manager for ""
	I0108 20:29:47.982288  103895 cni.go:136] 2 nodes found, recommending kindnet
	I0108 20:29:47.982302  103895 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:29:47.982358  103895 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-209824 NodeName:multinode-209824-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:29:47.982558  103895 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-209824-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:29:47.982629  103895 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-209824-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-209824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:29:47.982692  103895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:29:47.990889  103895 command_runner.go:130] > kubeadm
	I0108 20:29:47.990910  103895 command_runner.go:130] > kubectl
	I0108 20:29:47.990914  103895 command_runner.go:130] > kubelet
	I0108 20:29:47.991581  103895 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:29:47.991637  103895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 20:29:48.000750  103895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 20:29:48.020730  103895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:29:48.038736  103895 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 20:29:48.042762  103895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:29:48.053413  103895 host.go:66] Checking if "multinode-209824" exists ...
	I0108 20:29:48.053654  103895 config.go:182] Loaded profile config "multinode-209824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:29:48.053657  103895 start.go:304] JoinCluster: &{Name:multinode-209824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-209824 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:29:48.053745  103895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 20:29:48.053788  103895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:29:48.073687  103895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:29:48.220120  103895 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5jmond.hb39tamp8w2te1sf --discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 
	I0108 20:29:48.224769  103895 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:29:48.224829  103895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5jmond.hb39tamp8w2te1sf --discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-209824-m02"
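
These two commands are the whole join flow; done by hand it reduces to the sketch below, with placeholders standing in for the token and hash the first command prints (the unix:// scheme avoids the CRI-socket deprecation warning that appears further down):

    # on the control-plane node
    sudo kubeadm token create --print-join-command --ttl=0

    # on the worker, using the printed values
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --cri-socket unix:///var/run/crio/crio.sock
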
	I0108 20:29:48.261572  103895 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:29:48.292016  103895 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 20:29:48.292041  103895 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0108 20:29:48.292047  103895 command_runner.go:130] > OS: Linux
	I0108 20:29:48.292052  103895 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 20:29:48.292058  103895 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 20:29:48.292064  103895 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 20:29:48.292069  103895 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 20:29:48.292074  103895 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 20:29:48.292079  103895 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 20:29:48.292095  103895 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 20:29:48.292106  103895 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 20:29:48.292117  103895 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 20:29:48.388883  103895 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 20:29:48.388939  103895 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 20:29:48.417262  103895 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:29:48.417297  103895 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:29:48.417306  103895 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:29:48.493593  103895 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 20:29:51.009255  103895 command_runner.go:130] > This node has joined the cluster:
	I0108 20:29:51.009290  103895 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 20:29:51.009307  103895 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 20:29:51.009314  103895 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 20:29:51.012596  103895 command_runner.go:130] ! W0108 20:29:48.260951    1112 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 20:29:51.012628  103895 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0108 20:29:51.012645  103895 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:29:51.012668  103895 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5jmond.hb39tamp8w2te1sf --discovery-token-ca-cert-hash sha256:5f0d3868e129d146f2f118c1d4d93dd4eee494642df3f8db5a7e17a4b1fd36d7 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-209824-m02": (2.787821637s)
	I0108 20:29:51.012688  103895 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 20:29:51.106605  103895 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0108 20:29:51.185028  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-209824 minikube.k8s.io/updated_at=2024_01_08T20_29_51_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:29:51.273983  103895 command_runner.go:130] > node/multinode-209824-m02 labeled
	I0108 20:29:51.277651  103895 start.go:306] JoinCluster complete in 3.223986615s
	I0108 20:29:51.277688  103895 cni.go:84] Creating CNI manager for ""
	I0108 20:29:51.277696  103895 cni.go:136] 2 nodes found, recommending kindnet
	I0108 20:29:51.277754  103895 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:29:51.281370  103895 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:29:51.281395  103895 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I0108 20:29:51.281409  103895 command_runner.go:130] > Device: 37h/55d	Inode: 573813      Links: 1
	I0108 20:29:51.281419  103895 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:29:51.281429  103895 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0108 20:29:51.281445  103895 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0108 20:29:51.281459  103895 command_runner.go:130] > Change: 2024-01-08 20:09:53.352435730 +0000
	I0108 20:29:51.281471  103895 command_runner.go:130] >  Birth: 2024-01-08 20:09:53.328433283 +0000
	I0108 20:29:51.281525  103895 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:29:51.281542  103895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:29:51.299500  103895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:29:51.526089  103895 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:29:51.529708  103895 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:29:51.531991  103895 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 20:29:51.548624  103895 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 20:29:51.554794  103895 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:29:51.555127  103895 kapi.go:59] client config for multinode-209824: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:29:51.555654  103895 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:29:51.555679  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:51.555691  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:51.555700  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:51.559614  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:51.559648  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:51.559658  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:51 GMT
	I0108 20:29:51.559664  103895 round_trippers.go:580]     Audit-Id: 5ebbc203-c998-4649-b32d-2db55c87a9da
	I0108 20:29:51.559673  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:51.559683  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:51.559692  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:51.559701  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:51.559721  103895 round_trippers.go:580]     Content-Length: 291
	I0108 20:29:51.559764  103895 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f9e25afe-d819-409b-ac6a-2d6befc195f3","resourceVersion":"457","creationTimestamp":"2024-01-08T20:28:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:29:51.559921  103895 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-209824" context rescaled to 1 replica
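
The scale-subresource call above is equivalent to this kubectl one-liner (context name taken from this run):

    kubectl --context multinode-209824 -n kube-system scale deployment coredns --replicas=1
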
	I0108 20:29:51.559971  103895 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:29:51.562746  103895 out.go:177] * Verifying Kubernetes components...
	I0108 20:29:51.564366  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:29:51.578192  103895 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:29:51.578410  103895 kapi.go:59] client config for multinode-209824: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/profiles/multinode-209824/client.key", CAFile:"/home/jenkins/minikube-integration/17907-11003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:29:51.578669  103895 node_ready.go:35] waiting up to 6m0s for node "multinode-209824-m02" to be "Ready" ...
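
The polling loop that follows is the programmatic form of a kubectl wait; an equivalent one-liner, with the timeout from the log, would be:

    kubectl --context multinode-209824 wait --for=condition=Ready node/multinode-209824-m02 --timeout=6m
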
	I0108 20:29:51.578745  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:51.578752  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:51.578760  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:51.578768  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:51.580883  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:51.580909  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:51.580920  103895 round_trippers.go:580]     Audit-Id: 15c19f03-3b0e-441f-9e47-9be3fb51600a
	I0108 20:29:51.580929  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:51.580934  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:51.580939  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:51.580948  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:51.580954  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:51 GMT
	I0108 20:29:51.581120  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:52.079771  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:52.079815  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:52.079824  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:52.079830  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:52.082798  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:52.082827  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:52.082836  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:52.082846  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:52 GMT
	I0108 20:29:52.082854  103895 round_trippers.go:580]     Audit-Id: 96bdb451-aa6c-41ed-b5c7-d90519c252de
	I0108 20:29:52.082862  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:52.082871  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:52.082878  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:52.083047  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:52.579698  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:52.579730  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:52.579738  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:52.579745  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:52.582714  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:52.582737  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:52.582744  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:52 GMT
	I0108 20:29:52.582753  103895 round_trippers.go:580]     Audit-Id: 432273c7-ae86-45fe-8013-a60f4203d427
	I0108 20:29:52.582762  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:52.582772  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:52.582781  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:52.582788  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:52.583023  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:53.079712  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:53.079749  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:53.079758  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:53.079765  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:53.083065  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:53.083088  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:53.083099  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:53.083109  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:53.083121  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:53 GMT
	I0108 20:29:53.083128  103895 round_trippers.go:580]     Audit-Id: 5e2bb5be-1291-4f01-a888-06d3dbbf0bd2
	I0108 20:29:53.083135  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:53.083142  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:53.083277  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:53.579220  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:53.579245  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:53.579255  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:53.579262  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:53.581772  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:53.581804  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:53.581814  103895 round_trippers.go:580]     Audit-Id: 7f71190f-7c73-4d3c-a6ef-c93a9d05c9c9
	I0108 20:29:53.581822  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:53.581831  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:53.581840  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:53.581850  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:53.581858  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:53 GMT
	I0108 20:29:53.581990  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:53.582306  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:29:54.079864  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:54.079895  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:54.079903  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:54.079909  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:54.082415  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:54.082436  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:54.082443  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:54 GMT
	I0108 20:29:54.082448  103895 round_trippers.go:580]     Audit-Id: 951ba49b-440c-4ce1-b90d-f7e4588b3595
	I0108 20:29:54.082454  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:54.082459  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:54.082464  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:54.082469  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:54.082611  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:54.579231  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:54.579261  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:54.579269  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:54.579275  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:54.581939  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:54.581968  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:54.581977  103895 round_trippers.go:580]     Audit-Id: 24f8d8cd-1e33-4163-9570-8066ae32dace
	I0108 20:29:54.581983  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:54.581989  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:54.581994  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:54.581999  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:54.582004  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:54 GMT
	I0108 20:29:54.582230  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:55.078924  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:55.078962  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:55.078970  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:55.078977  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:55.082082  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:55.082114  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:55.082122  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:55.082127  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:55.082133  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:55.082138  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:55 GMT
	I0108 20:29:55.082144  103895 round_trippers.go:580]     Audit-Id: e1665ec7-93d0-4845-9653-2db90eadbe63
	I0108 20:29:55.082149  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:55.082405  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"499","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0108 20:29:55.578982  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:55.579009  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:55.579017  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:55.579023  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:55.581708  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:55.581743  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:55.581754  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:55.581761  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:55.581769  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:55.581776  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:55 GMT
	I0108 20:29:55.581783  103895 round_trippers.go:580]     Audit-Id: a6ca88bf-aa93-40db-a7c0-e7e1dc8bd24f
	I0108 20:29:55.581791  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:55.581938  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:55.582340  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:29:56.079720  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:56.079749  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:56.079761  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:56.079771  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:56.083091  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:56.083133  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:56.083146  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:56.083155  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:56.083165  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:56.083174  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:56 GMT
	I0108 20:29:56.083184  103895 round_trippers.go:580]     Audit-Id: 0a51e216-1749-4963-8753-a8acc3c1ab45
	I0108 20:29:56.083196  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:56.083397  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:56.578902  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:56.578941  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:56.578950  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:56.578956  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:56.582105  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:56.582136  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:56.582146  103895 round_trippers.go:580]     Audit-Id: f2d3522d-bdff-4f82-af4a-e2510141042c
	I0108 20:29:56.582153  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:56.582159  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:56.582165  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:56.582171  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:56.582176  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:56 GMT
	I0108 20:29:56.582337  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:57.079022  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:57.079059  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:57.079068  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:57.079074  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:57.082286  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:57.082319  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:57.082331  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:57.082339  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:57.082347  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:57.082355  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:57 GMT
	I0108 20:29:57.082362  103895 round_trippers.go:580]     Audit-Id: 0db79adf-c716-4407-bd9f-8130c6374fa2
	I0108 20:29:57.082369  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:57.082545  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:57.579865  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:57.579893  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:57.579901  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:57.579908  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:57.582747  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:57.582781  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:57.582791  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:57.582799  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:57.582807  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:57.582814  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:57.582822  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:57 GMT
	I0108 20:29:57.582834  103895 round_trippers.go:580]     Audit-Id: 8665f447-fbac-4326-951e-5ff12a22a41e
	I0108 20:29:57.583007  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:57.583381  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:29:58.079697  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:58.079719  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:58.079727  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:58.079733  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:58.082039  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:58.082056  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:58.082065  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:58.082070  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:58.082075  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:58.082080  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:58 GMT
	I0108 20:29:58.082085  103895 round_trippers.go:580]     Audit-Id: afd4623a-87dc-4813-8fdf-15b256d9d90c
	I0108 20:29:58.082092  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:58.082280  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:58.579178  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:58.579210  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:58.579219  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:58.579227  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:58.581656  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:58.581681  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:58.581692  103895 round_trippers.go:580]     Audit-Id: 9bf19c1e-ddbb-4ef2-bebd-b4048a0c75a8
	I0108 20:29:58.581700  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:58.581708  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:58.581719  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:58.581730  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:58.581740  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:58 GMT
	I0108 20:29:58.581869  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:59.079534  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:59.079560  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:59.079573  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:59.079584  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:59.082191  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:29:59.082210  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:59.082217  103895 round_trippers.go:580]     Audit-Id: 69ba8749-54cc-4d71-b57b-aabc556fb430
	I0108 20:29:59.082225  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:59.082234  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:59.082246  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:59.082257  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:59.082268  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:59 GMT
	I0108 20:29:59.082404  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:29:59.579095  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:29:59.579132  103895 round_trippers.go:469] Request Headers:
	I0108 20:29:59.579142  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:29:59.579148  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:29:59.582206  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:29:59.582228  103895 round_trippers.go:577] Response Headers:
	I0108 20:29:59.582237  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:29:59.582243  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:29:59.582248  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:29:59 GMT
	I0108 20:29:59.582253  103895 round_trippers.go:580]     Audit-Id: b93dc69f-3938-4aa9-9e85-49d3b2e3a591
	I0108 20:29:59.582258  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:29:59.582263  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:29:59.582423  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:30:00.079084  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:00.079122  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:00.079136  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:00.079148  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:00.081976  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:00.082015  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:00.082032  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:00 GMT
	I0108 20:30:00.082040  103895 round_trippers.go:580]     Audit-Id: 03f6be5f-4c87-44f0-b2b9-e8b0d57ecd1b
	I0108 20:30:00.082048  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:00.082058  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:00.082073  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:00.082081  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:00.082219  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:30:00.082618  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:00.579867  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:00.579894  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:00.579906  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:00.579915  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:00.582814  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:00.582842  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:00.582854  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:00.582863  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:00 GMT
	I0108 20:30:00.582872  103895 round_trippers.go:580]     Audit-Id: 42b75fd3-4fa5-4bd1-92b7-569e5d5c2811
	I0108 20:30:00.582882  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:00.582891  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:00.582899  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:00.583074  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"513","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0108 20:30:01.079104  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:01.079153  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:01.079166  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:01.079177  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:01.084501  103895 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:30:01.084542  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:01.084551  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:01.084558  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:01.084564  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:01.084570  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:01 GMT
	I0108 20:30:01.084575  103895 round_trippers.go:580]     Audit-Id: 5add7938-b2b6-414d-b550-78ac5b4fed15
	I0108 20:30:01.084581  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:01.084794  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:01.579557  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:01.579594  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:01.579604  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:01.579611  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:01.583031  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:01.583061  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:01.583071  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:01.583079  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:01.583094  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:01.583103  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:01.583116  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:01 GMT
	I0108 20:30:01.583128  103895 round_trippers.go:580]     Audit-Id: b6b22d54-23fd-4f6d-8351-8e5df8871246
	I0108 20:30:01.583283  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:02.079931  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:02.079957  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:02.079965  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:02.079975  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:02.082257  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:02.082278  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:02.082289  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:02 GMT
	I0108 20:30:02.082298  103895 round_trippers.go:580]     Audit-Id: 62bca37c-0ee7-4152-9b62-c6a4d4ab7549
	I0108 20:30:02.082307  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:02.082316  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:02.082326  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:02.082393  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:02.082525  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:02.082841  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
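
Note that between 20:30:00 and 20:30:01 the node object's resourceVersion advanced from "513" to "521" (and the truncated body grew from 5762 to 6031 chars), so some component is updating the node even though "Ready" is still "False". Polling works, but the same wait could also be expressed as a watch that resumes from that resourceVersion; a minimal sketch under the same assumptions as the previous snippet (illustrative only, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Watch just this node, resuming from the resourceVersion seen in the log.
        w, err := client.CoreV1().Nodes().Watch(context.TODO(), metav1.ListOptions{
            FieldSelector:   "metadata.name=multinode-209824-m02",
            ResourceVersion: "521",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        for ev := range w.ResultChan() {
            node, ok := ev.Object.(*corev1.Node)
            if !ok {
                continue // skip bookmark/error events in this simple sketch
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("node %q is Ready\n", node.Name)
                    return
                }
            }
        }
    }

A watch can be dropped by the server, so real code usually wraps it in a retry or an informer; the plain poll loop recorded in this log is the simpler, self-healing choice for a test harness.
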
	I0108 20:30:02.579139  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:02.579186  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:02.579200  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:02.579209  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:02.581931  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:02.581962  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:02.581971  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:02.581977  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:02.581982  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:02.581988  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:02 GMT
	I0108 20:30:02.581993  103895 round_trippers.go:580]     Audit-Id: f5a81d6a-0f88-465c-b117-eb938bfca8d6
	I0108 20:30:02.581997  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:02.582220  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:03.079899  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:03.079931  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:03.079940  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:03.079945  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:03.082862  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:03.082887  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:03.082898  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:03.082905  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:03.082912  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:03.082920  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:03.082928  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:03 GMT
	I0108 20:30:03.082938  103895 round_trippers.go:580]     Audit-Id: 007adf2a-dc6c-4bfd-b442-cc5b5b736262
	I0108 20:30:03.083127  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:03.579331  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:03.579376  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:03.579385  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:03.579391  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:03.582913  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:03.582937  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:03.582948  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:03.582957  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:03.582968  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:03 GMT
	I0108 20:30:03.582979  103895 round_trippers.go:580]     Audit-Id: eb466c1b-757b-4a25-ac81-0d5a36ecf7ef
	I0108 20:30:03.582992  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:03.583001  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:03.583140  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:04.079753  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:04.079776  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:04.079784  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:04.079791  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:04.082092  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:04.082110  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:04.082117  103895 round_trippers.go:580]     Audit-Id: 2478d138-8392-4412-b16d-e986c1be2c96
	I0108 20:30:04.082123  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:04.082128  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:04.082132  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:04.082137  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:04.082143  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:04 GMT
	I0108 20:30:04.082327  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:04.578983  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:04.579019  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:04.579028  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:04.579038  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:04.581968  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:04.582004  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:04.582012  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:04.582018  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:04.582023  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:04.582029  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:04 GMT
	I0108 20:30:04.582037  103895 round_trippers.go:580]     Audit-Id: 29a6f55a-a95c-4305-a165-dfe049754eb4
	I0108 20:30:04.582044  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:04.582301  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:04.582695  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:05.079911  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:05.079943  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:05.079968  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:05.079976  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:05.083157  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:05.083200  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:05.083213  103895 round_trippers.go:580]     Audit-Id: c3e2821a-af92-4952-a5c2-62c70015155c
	I0108 20:30:05.083220  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:05.083234  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:05.083242  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:05.083250  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:05.083264  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:05 GMT
	I0108 20:30:05.083570  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:05.579220  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:05.579246  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:05.579253  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:05.579259  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:05.581629  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:05.581658  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:05.581668  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:05.581676  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:05.581687  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:05 GMT
	I0108 20:30:05.581696  103895 round_trippers.go:580]     Audit-Id: d3edfd43-70a8-47fc-a1fa-9f658c7729cf
	I0108 20:30:05.581703  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:05.581711  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:05.581892  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:06.079646  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:06.079696  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:06.079705  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:06.079711  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:06.083034  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:06.083059  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:06.083068  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:06.083076  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:06.083084  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:06 GMT
	I0108 20:30:06.083092  103895 round_trippers.go:580]     Audit-Id: a37ebaa2-15ee-4bbf-9f9a-5c8f355bf190
	I0108 20:30:06.083098  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:06.083106  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:06.083221  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:06.579942  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:06.579977  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:06.579987  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:06.579994  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:06.582501  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:06.582529  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:06.582538  103895 round_trippers.go:580]     Audit-Id: e5c593c0-cdf9-4e0a-8a4f-b0d41a90988b
	I0108 20:30:06.582546  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:06.582553  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:06.582559  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:06.582567  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:06.582576  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:06 GMT
	I0108 20:30:06.582742  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:06.583152  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:07.079350  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:07.079423  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:07.079435  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:07.079444  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:07.082639  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:07.082666  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:07.082677  103895 round_trippers.go:580]     Audit-Id: c2cb9d3d-8eb5-4385-bd32-f6695efbc940
	I0108 20:30:07.082687  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:07.082696  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:07.082703  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:07.082712  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:07.082725  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:07 GMT
	I0108 20:30:07.082906  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:07.579747  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:07.579795  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:07.579810  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:07.579821  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:07.582945  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:07.582967  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:07.582975  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:07 GMT
	I0108 20:30:07.582982  103895 round_trippers.go:580]     Audit-Id: 00599cf4-3da4-4d3e-b6d4-88b65b496cac
	I0108 20:30:07.582991  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:07.582999  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:07.583007  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:07.583016  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:07.583193  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:08.079933  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:08.079990  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:08.080003  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:08.080013  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:08.083236  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:08.083267  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:08.083277  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:08.083286  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:08.083296  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:08 GMT
	I0108 20:30:08.083305  103895 round_trippers.go:580]     Audit-Id: ca5b39ee-fe8b-408b-b065-d776f4dd40d9
	I0108 20:30:08.083313  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:08.083322  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:08.083494  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:08.579334  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:08.579387  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:08.579397  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:08.579403  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:08.582310  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:08.582345  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:08.582356  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:08 GMT
	I0108 20:30:08.582367  103895 round_trippers.go:580]     Audit-Id: 63f07fc9-8de6-4554-9490-9c878c8e2121
	I0108 20:30:08.582376  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:08.582385  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:08.582397  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:08.582405  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:08.582573  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:09.079232  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:09.079264  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:09.079272  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:09.079279  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:09.082273  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:09.082305  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:09.082314  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:09.082323  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:09 GMT
	I0108 20:30:09.082333  103895 round_trippers.go:580]     Audit-Id: a6863c18-c0c0-475d-9f2e-c5389f8b04ff
	I0108 20:30:09.082341  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:09.082348  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:09.082361  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:09.082546  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:09.082997  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:09.579117  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:09.579145  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:09.579153  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:09.579159  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:09.584354  103895 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:30:09.584395  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:09.584408  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:09.584417  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:09.584425  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:09.584434  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:09.584443  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:09 GMT
	I0108 20:30:09.584452  103895 round_trippers.go:580]     Audit-Id: 59ef52eb-331b-461d-8596-7d2a673d7b1a
	I0108 20:30:09.584757  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:10.079516  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:10.079563  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:10.079576  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:10.079588  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:10.083008  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:10.083059  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:10.083071  103895 round_trippers.go:580]     Audit-Id: d38c7e4b-5919-4ed4-b748-de93de295040
	I0108 20:30:10.083080  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:10.083087  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:10.083094  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:10.083103  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:10.083111  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:10 GMT
	I0108 20:30:10.083311  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:10.578985  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:10.579031  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:10.579040  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:10.579046  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:10.582413  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:10.582447  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:10.582457  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:10.582464  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:10.582472  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:10 GMT
	I0108 20:30:10.582479  103895 round_trippers.go:580]     Audit-Id: 4f6d9d00-d7a6-4627-a486-64e20b9e049c
	I0108 20:30:10.582486  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:10.582493  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:10.582652  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:11.079661  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:11.079695  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:11.079704  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:11.079710  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:11.083222  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:11.083266  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:11.083280  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:11.083291  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:11.083300  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:11 GMT
	I0108 20:30:11.083309  103895 round_trippers.go:580]     Audit-Id: 8dac0f70-79dd-45ce-af15-331d648e3852
	I0108 20:30:11.083316  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:11.083324  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:11.083524  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:11.084022  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
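	(editor's note) The cycles above show the shape of the wait loop driving this part of the test: every ~500ms the client issues GET /api/v1/nodes/multinode-209824-m02, and node_ready.go reports the node's Ready condition until it flips to True. The following is a minimal illustrative sketch of that polling pattern using client-go — it is not minikube's actual node_ready.go implementation, and the kubeconfig loading and 6-minute timeout are assumed placeholders chosen for the example.

	// nodeready_sketch.go — hypothetical stand-alone sketch of the readiness
	// poll visible in the log above; assumes k8s.io/client-go is available.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server for the named node every 500ms
	// (matching the request cadence in the log) until its NodeReady
	// condition is True or the timeout elapses.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					if c.Status != corev1.ConditionTrue {
						// mirrors the 'has status "Ready":"False"' lines above
						fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
						return false, nil
					}
					return true, nil
				}
			}
			return false, nil // condition not reported yet
		})
	}

	func main() {
		// Assumed setup: load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Node name taken from the log; the timeout is an assumption.
		if err := waitNodeReady(cs, "multinode-209824-m02", 6*time.Minute); err != nil {
			log.Fatalf("node never became Ready: %v", err)
		}
	}

	Each iteration of that condition function corresponds to one GET/Response-Headers/Response-Body cycle in the log; the node stays NotReady throughout this stretch, so the loop keeps re-polling.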
	I0108 20:30:11.579093  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:11.579126  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:11.579136  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:11.579145  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:11.582654  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:11.582694  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:11.582705  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:11.582715  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:11.582723  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:11.582730  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:11 GMT
	I0108 20:30:11.582741  103895 round_trippers.go:580]     Audit-Id: 9e02cc95-0194-44f1-a1e0-dc82ff673b8b
	I0108 20:30:11.582751  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:11.582921  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:12.079173  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:12.079207  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:12.079216  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:12.079222  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:12.082459  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:12.082490  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:12.082504  103895 round_trippers.go:580]     Audit-Id: 1267c77d-45a4-4e29-8c8a-18d430c9666a
	I0108 20:30:12.082514  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:12.082522  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:12.082529  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:12.082545  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:12.082558  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:12 GMT
	I0108 20:30:12.082729  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:12.579476  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:12.579510  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:12.579525  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:12.579536  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:12.582627  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:12.582655  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:12.582666  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:12 GMT
	I0108 20:30:12.582672  103895 round_trippers.go:580]     Audit-Id: d535bd27-8c41-4d6e-9f8f-5d557ddc1182
	I0108 20:30:12.582677  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:12.582683  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:12.582693  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:12.582701  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:12.582946  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:13.079759  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:13.079797  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:13.079806  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:13.079812  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:13.083017  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:13.083053  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:13.083061  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:13 GMT
	I0108 20:30:13.083067  103895 round_trippers.go:580]     Audit-Id: 33ce58e9-568d-4f2b-8391-a25550608957
	I0108 20:30:13.083073  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:13.083078  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:13.083083  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:13.083092  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:13.083281  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:13.579198  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:13.579242  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:13.579253  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:13.579268  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:13.582511  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:13.582543  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:13.582551  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:13.582557  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:13 GMT
	I0108 20:30:13.582563  103895 round_trippers.go:580]     Audit-Id: de69f80d-e52a-4d4d-ab94-6c2e135b04bc
	I0108 20:30:13.582571  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:13.582578  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:13.582586  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:13.582804  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:13.583202  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:14.079485  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:14.079516  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:14.079526  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:14.079544  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:14.082935  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:14.082964  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:14.082972  103895 round_trippers.go:580]     Audit-Id: 8f8a960e-cbab-4df4-b3c8-2cf747cb1d00
	I0108 20:30:14.082978  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:14.082983  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:14.082989  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:14.082995  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:14.083000  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:14 GMT
	I0108 20:30:14.083240  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:14.578970  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:14.579021  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:14.579034  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:14.579044  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:14.582023  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:14.582055  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:14.582064  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:14.582072  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:14.582079  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:14.582086  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:14 GMT
	I0108 20:30:14.582094  103895 round_trippers.go:580]     Audit-Id: 1da22642-dc3e-42f2-858c-677a7c74c898
	I0108 20:30:14.582101  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:14.582250  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:15.078885  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:15.078922  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:15.078931  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:15.078939  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:15.081389  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:15.081411  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:15.081421  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:15.081428  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:15 GMT
	I0108 20:30:15.081435  103895 round_trippers.go:580]     Audit-Id: 14592681-8562-4528-bef5-bf1e1a669ce6
	I0108 20:30:15.081442  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:15.081449  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:15.081459  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:15.081641  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:15.579217  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:15.579248  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:15.579259  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:15.579268  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:15.582350  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:15.582386  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:15.582397  103895 round_trippers.go:580]     Audit-Id: 19fec16c-99fa-4b5e-b018-0fed84e436e5
	I0108 20:30:15.582406  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:15.582413  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:15.582421  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:15.582431  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:15.582441  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:15 GMT
	I0108 20:30:15.582595  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:16.079161  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:16.079200  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:16.079216  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:16.079224  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:16.081919  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:16.081950  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:16.081958  103895 round_trippers.go:580]     Audit-Id: 44e424e5-b529-4cc3-aa84-461927f4275f
	I0108 20:30:16.081964  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:16.081970  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:16.081977  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:16.081984  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:16.081992  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:16 GMT
	I0108 20:30:16.082195  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:16.082627  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:16.579915  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:16.579953  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:16.579963  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:16.579971  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:16.583268  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:16.583302  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:16.583312  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:16.583321  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:16.583328  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:16.583336  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:16.583345  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:16 GMT
	I0108 20:30:16.583352  103895 round_trippers.go:580]     Audit-Id: 3ede0478-239c-4dd8-9d15-c56e659c3a0e
	I0108 20:30:16.583618  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:17.079251  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:17.079277  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:17.079285  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:17.079291  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:17.081790  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:17.081819  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:17.081829  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:17.081838  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:17.081845  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:17 GMT
	I0108 20:30:17.081855  103895 round_trippers.go:580]     Audit-Id: 78e34445-1d31-4fcc-bcc5-078c4b9b3b7e
	I0108 20:30:17.081861  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:17.081872  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:17.082000  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:17.579782  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:17.579815  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:17.579823  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:17.579829  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:17.583170  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:17.583201  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:17.583212  103895 round_trippers.go:580]     Audit-Id: e638364f-c286-4afe-820c-c6077e5283a2
	I0108 20:30:17.583220  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:17.583228  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:17.583235  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:17.583243  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:17.583250  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:17 GMT
	I0108 20:30:17.583419  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:18.079082  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:18.079121  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:18.079130  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:18.079137  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:18.082215  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:18.082252  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:18.082263  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:18.082271  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:18.082279  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:18 GMT
	I0108 20:30:18.082287  103895 round_trippers.go:580]     Audit-Id: 2b474ddc-4290-4732-8d83-7ef11e2b02dc
	I0108 20:30:18.082295  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:18.082304  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:18.082445  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:18.082798  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:18.579330  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:18.579387  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:18.579402  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:18.579410  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:18.582922  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:18.582961  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:18.582974  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:18 GMT
	I0108 20:30:18.582984  103895 round_trippers.go:580]     Audit-Id: a7f1da0f-f919-443a-a3e6-aefc5bee6f2c
	I0108 20:30:18.582993  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:18.583001  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:18.583008  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:18.583015  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:18.583208  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:19.079853  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:19.079877  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:19.079884  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:19.079890  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:19.082701  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:19.082735  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:19.082745  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:19.082758  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:19.082766  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:19.082775  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:19.082783  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:19 GMT
	I0108 20:30:19.082790  103895 round_trippers.go:580]     Audit-Id: fe4fb05a-ab76-4f51-8436-981f0153a3aa
	I0108 20:30:19.082945  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:19.579650  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:19.579686  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:19.579698  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:19.579708  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:19.582709  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:19.582737  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:19.582744  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:19.582750  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:19.582758  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:19.582766  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:19.582774  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:19 GMT
	I0108 20:30:19.582782  103895 round_trippers.go:580]     Audit-Id: b4c3a0eb-b5fe-42fe-8fc1-17bbda6d1534
	I0108 20:30:19.583029  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:20.079665  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:20.079691  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:20.079700  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:20.079706  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:20.082577  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:20.082604  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:20.082614  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:20 GMT
	I0108 20:30:20.082621  103895 round_trippers.go:580]     Audit-Id: 3db392c5-221c-4683-9802-90ac68e4924a
	I0108 20:30:20.082629  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:20.082638  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:20.082650  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:20.082658  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:20.082794  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:20.083277  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:20.579586  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:20.579612  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:20.579620  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:20.579626  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:20.582271  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:20.582303  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:20.582312  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:20.582318  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:20.582323  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:20 GMT
	I0108 20:30:20.582328  103895 round_trippers.go:580]     Audit-Id: 997d99d3-0a9f-4b0b-9327-c199a24399b9
	I0108 20:30:20.582334  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:20.582339  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:20.582464  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:21.079167  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:21.079193  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:21.079201  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:21.079207  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:21.082017  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:21.082065  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:21.082078  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:21.082088  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:21.082096  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:21 GMT
	I0108 20:30:21.082103  103895 round_trippers.go:580]     Audit-Id: 287d65a0-be55-47ba-acca-5c117c668fc2
	I0108 20:30:21.082111  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:21.082120  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:21.082272  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:21.578861  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:21.578904  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:21.578913  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:21.578919  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:21.581852  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:21.581880  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:21.581891  103895 round_trippers.go:580]     Audit-Id: d913ab73-7d58-4b64-b2d4-5c1df7ec5f0c
	I0108 20:30:21.581899  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:21.581907  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:21.581915  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:21.581923  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:21.581930  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:21 GMT
	I0108 20:30:21.582056  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:22.079816  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:22.079854  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:22.079866  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:22.079877  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:22.083283  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:22.083316  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:22.083331  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:22.083344  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:22.083351  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:22.083385  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:22.083395  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:22 GMT
	I0108 20:30:22.083404  103895 round_trippers.go:580]     Audit-Id: b0c46259-730c-4367-b8ef-ebefb84ce49a
	I0108 20:30:22.083566  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:22.083934  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:22.579095  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:22.579133  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:22.579146  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:22.579156  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:22.582084  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:22.582111  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:22.582122  103895 round_trippers.go:580]     Audit-Id: bb7016c4-4b06-4067-b6f5-f821097b3630
	I0108 20:30:22.582129  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:22.582139  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:22.582146  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:22.582155  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:22.582163  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:22 GMT
	I0108 20:30:22.582358  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:23.079045  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:23.079084  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:23.079099  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:23.079107  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:23.082422  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:23.082462  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:23.082474  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:23.082481  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:23.082491  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:23.082500  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:23 GMT
	I0108 20:30:23.082507  103895 round_trippers.go:580]     Audit-Id: 0fdebe4f-55b9-46ce-8efa-64efab08535e
	I0108 20:30:23.082515  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:23.082663  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:23.579634  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:23.579690  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:23.579698  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:23.579705  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:23.582358  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:23.582395  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:23.582412  103895 round_trippers.go:580]     Audit-Id: b0322fe6-4fde-4871-9b12-1d70ce15ad81
	I0108 20:30:23.582426  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:23.582436  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:23.582445  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:23.582454  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:23.582464  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:23 GMT
	I0108 20:30:23.582585  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:24.079175  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:24.079209  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:24.079218  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:24.079225  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:24.082310  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:24.082347  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:24.082356  103895 round_trippers.go:580]     Audit-Id: 7a986c4e-5ac8-46c2-b140-fc083d9a3768
	I0108 20:30:24.082367  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:24.082373  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:24.082379  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:24.082385  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:24.082390  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:24 GMT
	I0108 20:30:24.082608  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:24.579256  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:24.579300  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:24.579312  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:24.579321  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:24.582443  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:24.582477  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:24.582485  103895 round_trippers.go:580]     Audit-Id: b5b2b1da-974a-451e-82aa-68a708f1a3ae
	I0108 20:30:24.582493  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:24.582500  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:24.582507  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:24.582515  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:24.582535  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:24 GMT
	I0108 20:30:24.582697  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:24.583191  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:25.079435  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:25.079471  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:25.079483  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:25.079491  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:25.083193  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:25.083225  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:25.083233  103895 round_trippers.go:580]     Audit-Id: cf7731c2-d02b-4eb7-83ab-e855198c8fe0
	I0108 20:30:25.083239  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:25.083245  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:25.083250  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:25.083258  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:25.083264  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:25 GMT
	I0108 20:30:25.083484  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:25.578996  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:25.579033  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:25.579056  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:25.579062  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:25.582616  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:25.582651  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:25.582664  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:25.582673  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:25.582682  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:25 GMT
	I0108 20:30:25.582690  103895 round_trippers.go:580]     Audit-Id: 651a62fc-b8d4-4023-b136-4e095f49a70c
	I0108 20:30:25.582698  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:25.582705  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:25.582871  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:26.079642  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:26.079683  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:26.079696  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:26.079705  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:26.082811  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:26.082830  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:26.082841  103895 round_trippers.go:580]     Audit-Id: 439ab446-f95e-4fdf-b713-b095160bc604
	I0108 20:30:26.082847  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:26.082852  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:26.082857  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:26.082862  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:26.082867  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:26 GMT
	I0108 20:30:26.083032  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:26.579781  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:26.579824  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:26.579833  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:26.579839  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:26.582815  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:26.582841  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:26.582850  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:26 GMT
	I0108 20:30:26.582858  103895 round_trippers.go:580]     Audit-Id: 70494c30-79a8-4255-8a57-b69920cdd3e7
	I0108 20:30:26.582866  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:26.582873  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:26.582882  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:26.582890  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:26.583050  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:26.583347  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:27.079724  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:27.079746  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:27.079754  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:27.079762  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:27.082129  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:27.082148  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:27.082155  103895 round_trippers.go:580]     Audit-Id: 5e0dbb3c-a1f8-4606-b42b-910cdb15591b
	I0108 20:30:27.082160  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:27.082165  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:27.082170  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:27.082176  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:27.082181  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:27 GMT
	I0108 20:30:27.082335  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:27.578941  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:27.578968  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:27.578976  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:27.578982  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:27.581297  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:27.581318  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:27.581325  103895 round_trippers.go:580]     Audit-Id: 0e9395a1-1b95-4f59-adb5-bc1eace8a319
	I0108 20:30:27.581330  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:27.581335  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:27.581346  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:27.581351  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:27.581357  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:27 GMT
	I0108 20:30:27.581501  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:28.079128  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:28.079164  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:28.079174  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:28.079181  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:28.082750  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:28.082778  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:28.082786  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:28.082792  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:28.082797  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:28.082804  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:28 GMT
	I0108 20:30:28.082810  103895 round_trippers.go:580]     Audit-Id: 540c5ad1-ceb2-4f7f-b656-391fa80c3176
	I0108 20:30:28.082815  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:28.083041  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:28.579894  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:28.579920  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:28.579928  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:28.579934  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:28.583015  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:28.583057  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:28.583070  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:28.583081  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:28.583091  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:28.583101  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:28 GMT
	I0108 20:30:28.583108  103895 round_trippers.go:580]     Audit-Id: 3a13edbf-569e-4204-a448-9c752573679f
	I0108 20:30:28.583114  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:28.583291  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:28.583710  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:29.079866  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:29.079889  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:29.079897  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:29.079903  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:29.082630  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:29.082663  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:29.082676  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:29.082687  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:29.082696  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:29 GMT
	I0108 20:30:29.082704  103895 round_trippers.go:580]     Audit-Id: ba2f0c27-196c-46f7-810e-ec294b88d66d
	I0108 20:30:29.082712  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:29.082721  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:29.082896  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:29.579657  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:29.579690  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:29.579699  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:29.579705  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:29.582649  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:29.582669  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:29.582676  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:29.582681  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:29.582687  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:29.582692  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:29 GMT
	I0108 20:30:29.582698  103895 round_trippers.go:580]     Audit-Id: 44af6e4d-14e7-415e-b2ab-d142a5419f47
	I0108 20:30:29.582703  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:29.582837  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:30.079639  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:30.079674  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:30.079683  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:30.079689  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:30.083252  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:30.083281  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:30.083290  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:30 GMT
	I0108 20:30:30.083295  103895 round_trippers.go:580]     Audit-Id: 9b374450-a0db-4359-8a56-a5d2769ca801
	I0108 20:30:30.083301  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:30.083306  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:30.083312  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:30.083317  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:30.083523  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:30.578994  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:30.579036  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:30.579045  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:30.579051  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:30.581572  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:30.581603  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:30.581616  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:30 GMT
	I0108 20:30:30.581624  103895 round_trippers.go:580]     Audit-Id: 07c8e42e-2baf-47ca-b314-ba45c6af82c0
	I0108 20:30:30.581631  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:30.581639  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:30.581646  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:30.581654  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:30.581802  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:31.079827  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:31.079848  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:31.079856  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:31.079862  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:31.082261  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:31.082284  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:31.082291  103895 round_trippers.go:580]     Audit-Id: 673b3104-2177-4a92-932d-3109156db95f
	I0108 20:30:31.082296  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:31.082302  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:31.082307  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:31.082314  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:31.082359  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:31 GMT
	I0108 20:30:31.082514  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:31.082847  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:31.579142  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:31.579184  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:31.579198  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:31.579211  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:31.581918  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:31.581952  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:31.581964  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:31.581982  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:31.581988  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:31 GMT
	I0108 20:30:31.581993  103895 round_trippers.go:580]     Audit-Id: 66d64d38-9283-44bd-8cba-6da016a6b656
	I0108 20:30:31.581998  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:31.582005  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:31.582230  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:32.078927  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:32.078954  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:32.078962  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:32.078968  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:32.081983  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:32.082015  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:32.082026  103895 round_trippers.go:580]     Audit-Id: a0ed6c47-dc9c-4338-b267-02854b84c830
	I0108 20:30:32.082035  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:32.082042  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:32.082059  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:32.082067  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:32.082080  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:32 GMT
	I0108 20:30:32.082198  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:32.579873  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:32.579901  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:32.579909  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:32.579915  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:32.582699  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:32.582728  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:32.582736  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:32.582742  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:32.582747  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:32.582753  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:32.582758  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:32 GMT
	I0108 20:30:32.582763  103895 round_trippers.go:580]     Audit-Id: dd65295c-0c33-4b08-b7dc-148a17d67d7b
	I0108 20:30:32.582995  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:33.079804  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:33.079842  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:33.079852  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:33.079859  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:33.082707  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:33.082732  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:33.082742  103895 round_trippers.go:580]     Audit-Id: 9552066b-db7a-4a67-bb31-8c2c53040e15
	I0108 20:30:33.082749  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:33.082756  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:33.082764  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:33.082771  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:33.082779  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:33 GMT
	I0108 20:30:33.082961  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:33.083294  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:33.579295  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:33.579329  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:33.579337  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:33.579344  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:33.582736  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:33.582763  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:33.582774  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:33.582782  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:33.582788  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:33 GMT
	I0108 20:30:33.582798  103895 round_trippers.go:580]     Audit-Id: 03eb954f-4652-41b1-86b6-85c8714a231c
	I0108 20:30:33.582805  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:33.582813  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:33.582938  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:34.079586  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:34.079617  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:34.079629  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:34.079639  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:34.082009  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:34.082035  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:34.082046  103895 round_trippers.go:580]     Audit-Id: c7b381e8-23e5-4167-aac9-3b435a11d940
	I0108 20:30:34.082055  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:34.082064  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:34.082072  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:34.082078  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:34.082084  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:34 GMT
	I0108 20:30:34.082237  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:34.578981  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:34.579025  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:34.579033  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:34.579039  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:34.581838  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:34.581871  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:34.581883  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:34 GMT
	I0108 20:30:34.581891  103895 round_trippers.go:580]     Audit-Id: 99ba9821-466f-4bc2-a49c-49bb358fc2bc
	I0108 20:30:34.581903  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:34.581910  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:34.581917  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:34.581925  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:34.582100  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:35.079707  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:35.079734  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:35.079743  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:35.079750  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:35.082475  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:35.082498  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:35.082505  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:35.082511  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:35.082516  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:35.082521  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:35 GMT
	I0108 20:30:35.082533  103895 round_trippers.go:580]     Audit-Id: 6a67ad10-c441-4c9f-8203-0b87f6fec855
	I0108 20:30:35.082542  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:35.082683  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:35.579476  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:35.579503  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:35.579511  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:35.579517  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:35.582184  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:35.582210  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:35.582220  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:35.582229  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:35.582238  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:35 GMT
	I0108 20:30:35.582254  103895 round_trippers.go:580]     Audit-Id: 49175188-df08-4364-928c-2dc4b5d6505c
	I0108 20:30:35.582259  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:35.582267  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:35.582450  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:35.582954  103895 node_ready.go:58] node "multinode-209824-m02" has status "Ready":"False"
	I0108 20:30:36.078984  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:36.079005  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:36.079013  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:36.079019  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:36.082016  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:36.082054  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:36.082066  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:36.082075  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:36.082085  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:36 GMT
	I0108 20:30:36.082095  103895 round_trippers.go:580]     Audit-Id: 09f78bb8-ec18-4682-9deb-e6882daf6584
	I0108 20:30:36.082100  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:36.082106  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:36.082249  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:36.579680  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:36.579712  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:36.579725  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:36.579734  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:36.582033  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:36.582053  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:36.582060  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:36.582065  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:36.582071  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:36.582078  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:36 GMT
	I0108 20:30:36.582086  103895 round_trippers.go:580]     Audit-Id: 4d78b2d9-8da4-4061-a477-7638ae055028
	I0108 20:30:36.582095  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:36.582266  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"521","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0108 20:30:37.079778  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:37.079801  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.079808  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.079814  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.082994  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:37.083035  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.083044  103895 round_trippers.go:580]     Audit-Id: 3af6c8e0-ea1a-4183-9690-fdc4e1bddbe1
	I0108 20:30:37.083051  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.083056  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.083062  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.083068  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.083076  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.083301  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"567","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5848 chars]
	I0108 20:30:37.083713  103895 node_ready.go:49] node "multinode-209824-m02" has status "Ready":"True"
	I0108 20:30:37.083735  103895 node_ready.go:38] duration metric: took 45.505049395s waiting for node "multinode-209824-m02" to be "Ready" ...
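The loop above is the node-readiness poll from node_ready.go: the node is fetched roughly every 500ms and its Ready condition checked until it flips to "True" (here at 20:30:37, after 45.5s). A minimal client-go sketch of that pattern, for illustration only — not minikube's actual code; waitNodeReady is a hypothetical name:

// Sketch only: poll a node until its Ready condition is True or the
// context deadline expires. waitNodeReady is a hypothetical name.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // "Ready":"True", as at 20:30:37 above
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // deadline hit while still NotReady
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "multinode-209824-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}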
	I0108 20:30:37.083747  103895 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:30:37.083808  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 20:30:37.083817  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.083824  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.083831  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.091015  103895 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 20:30:37.091054  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.091067  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.091076  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.091091  103895 round_trippers.go:580]     Audit-Id: d83af9a0-9db9-4620-85a2-bc5e9355ed7d
	I0108 20:30:37.091100  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.091109  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.091116  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.091678  103895 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"567"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"453","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0108 20:30:37.093834  103895 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ds62v" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.093954  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ds62v
	I0108 20:30:37.093964  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.093973  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.093982  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.097002  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:37.097027  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.097036  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.097044  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.097052  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.097059  103895 round_trippers.go:580]     Audit-Id: 5740835f-1384-4714-bfe0-9da533b3909e
	I0108 20:30:37.097067  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.097074  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.097236  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ds62v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"926641e0-32b5-4e31-8361-c677061ec067","resourceVersion":"453","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"f4c279e9-4bc0-4d2f-a359-efd57f369ce4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f4c279e9-4bc0-4d2f-a359-efd57f369ce4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 20:30:37.097660  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:37.097671  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.097678  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.097683  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.099800  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:37.099818  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.099825  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.099830  103895 round_trippers.go:580]     Audit-Id: 2130547f-ebdc-4007-9b31-2fee09c7150a
	I0108 20:30:37.099835  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.099843  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.099851  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.099862  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.100006  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:30:37.100295  103895 pod_ready.go:92] pod "coredns-5dd5756b68-ds62v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:30:37.100311  103895 pod_ready.go:81] duration metric: took 6.444854ms waiting for pod "coredns-5dd5756b68-ds62v" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.100319  103895 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.100372  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-209824
	I0108 20:30:37.100379  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.100386  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.100392  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.102397  103895 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:30:37.102420  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.102432  103895 round_trippers.go:580]     Audit-Id: 377e0212-d284-4237-824a-76e3f897f121
	I0108 20:30:37.102440  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.102448  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.102457  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.102470  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.102479  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.102582  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-209824","namespace":"kube-system","uid":"2ba4f928-8212-4851-a0d0-ecb6766b0d38","resourceVersion":"424","creationTimestamp":"2024-01-08T20:28:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"efd65551b99a6b027e44ff3fe2a4bac4","kubernetes.io/config.mirror":"efd65551b99a6b027e44ff3fe2a4bac4","kubernetes.io/config.seen":"2024-01-08T20:28:40.985955037Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 20:30:37.103040  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:37.103053  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.103060  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.103066  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.105259  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:37.105282  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.105293  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.105301  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.105310  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.105321  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.105330  103895 round_trippers.go:580]     Audit-Id: 77fffd00-fea8-4287-8640-371cb671c78e
	I0108 20:30:37.105336  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.105474  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:30:37.105855  103895 pod_ready.go:92] pod "etcd-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:30:37.105873  103895 pod_ready.go:81] duration metric: took 5.544959ms waiting for pod "etcd-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.105892  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.105965  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-209824
	I0108 20:30:37.105975  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.105984  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.105992  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.107911  103895 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:30:37.107927  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.107934  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.107942  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.107950  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.107959  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.107976  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.107985  103895 round_trippers.go:580]     Audit-Id: 15d5dab5-459a-4398-8672-6d2eb7c2b791
	I0108 20:30:37.108158  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-209824","namespace":"kube-system","uid":"aceea26b-3461-4240-979e-c8aa9f77e8fb","resourceVersion":"398","creationTimestamp":"2024-01-08T20:28:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cf7593793b45828fd4be9343f68cfb68","kubernetes.io/config.mirror":"cf7593793b45828fd4be9343f68cfb68","kubernetes.io/config.seen":"2024-01-08T20:28:47.413857407Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 20:30:37.108821  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:37.108839  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.108850  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.108860  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.111274  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:37.111308  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.111321  103895 round_trippers.go:580]     Audit-Id: bf2fafda-40eb-4e39-8ad2-b5b1d7ec4212
	I0108 20:30:37.111330  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.111339  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.111348  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.111382  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.111398  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.111507  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:30:37.111873  103895 pod_ready.go:92] pod "kube-apiserver-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:30:37.111890  103895 pod_ready.go:81] duration metric: took 5.986841ms waiting for pod "kube-apiserver-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.111899  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.111957  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-209824
	I0108 20:30:37.111964  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.111971  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.111977  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.114324  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:37.114342  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.114349  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.114360  103895 round_trippers.go:580]     Audit-Id: 47ed93c8-cd43-4d4f-9004-26ff56e53a38
	I0108 20:30:37.114368  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.114376  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.114384  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.114392  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.114596  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-209824","namespace":"kube-system","uid":"9e898128-2e03-41f5-8afc-23b34ee9e755","resourceVersion":"422","creationTimestamp":"2024-01-08T20:28:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e91228d8e0f7fca8a3a21815e52a30df","kubernetes.io/config.mirror":"e91228d8e0f7fca8a3a21815e52a30df","kubernetes.io/config.seen":"2024-01-08T20:28:47.413859105Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 20:30:37.115226  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:37.115243  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.115255  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.115267  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.117960  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:37.117981  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.117988  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.117994  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.117999  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.118005  103895 round_trippers.go:580]     Audit-Id: ce917e41-b9c9-4635-97cb-6af245494874
	I0108 20:30:37.118010  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.118015  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.118130  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:30:37.118473  103895 pod_ready.go:92] pod "kube-controller-manager-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:30:37.118492  103895 pod_ready.go:81] duration metric: took 6.58412ms waiting for pod "kube-controller-manager-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.118507  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7gtj2" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.279929  103895 request.go:629] Waited for 161.319379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gtj2
	I0108 20:30:37.280018  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7gtj2
	I0108 20:30:37.280023  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.280030  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.280048  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.283674  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:37.283721  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.283730  103895 round_trippers.go:580]     Audit-Id: 906e6e83-8fc5-4e60-826f-d769b0423e9d
	I0108 20:30:37.283738  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.283746  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.283754  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.283765  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.283774  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.283921  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7gtj2","generateName":"kube-proxy-","namespace":"kube-system","uid":"1dfab8e6-1b8d-491a-8d1e-a9e32af7c603","resourceVersion":"536","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9cbae20-0613-456c-9bd2-0a174674a6ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9cbae20-0613-456c-9bd2-0a174674a6ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
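The "Waited for ... due to client-side throttling, not priority and fairness" messages that start appearing here are emitted by client-go's own token-bucket rate limiter, not by the API server: once the burst of GETs exceeds the client's QPS/Burst budget, each request is delayed locally before being sent. A minimal sketch of where those knobs live on rest.Config (the values below are illustrative, not what minikube configures):

// Sketch only: the client-side rate limiter behind the request.go:629
// messages lives on rest.Config. Values here are illustrative.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go defaults to 5 QPS with a burst
	// of 10; requests beyond that budget are delayed locally, producing the
	// "Waited for ... due to client-side throttling" log lines.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}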
	I0108 20:30:37.479904  103895 request.go:629] Waited for 195.328613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:37.480006  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824-m02
	I0108 20:30:37.480012  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.480021  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.480027  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.483429  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:37.483459  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.483471  103895 round_trippers.go:580]     Audit-Id: 416dbb60-6dba-4622-a5cc-6652bf4efe92
	I0108 20:30:37.483478  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.483486  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.483493  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.483501  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.483509  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.483680  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824-m02","uid":"3507ede4-4076-430c-820a-b747e32382bd","resourceVersion":"567","creationTimestamp":"2024-01-08T20:29:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_29_51_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5848 chars]
	I0108 20:30:37.484125  103895 pod_ready.go:92] pod "kube-proxy-7gtj2" in "kube-system" namespace has status "Ready":"True"
	I0108 20:30:37.484153  103895 pod_ready.go:81] duration metric: took 365.637674ms waiting for pod "kube-proxy-7gtj2" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.484166  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s267w" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.680073  103895 request.go:629] Waited for 195.78918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s267w
	I0108 20:30:37.680182  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s267w
	I0108 20:30:37.680188  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.680196  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.680203  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.683761  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:37.683791  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.683802  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.683810  103895 round_trippers.go:580]     Audit-Id: 24ddd56c-c835-4cab-83ad-850d4350ab74
	I0108 20:30:37.683817  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.683825  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.683832  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.683841  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.684055  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s267w","generateName":"kube-proxy-","namespace":"kube-system","uid":"825c87c7-7b31-44a0-9009-1603f045b6a8","resourceVersion":"409","creationTimestamp":"2024-01-08T20:29:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b9cbae20-0613-456c-9bd2-0a174674a6ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:29:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9cbae20-0613-456c-9bd2-0a174674a6ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 20:30:37.880003  103895 request.go:629] Waited for 195.343868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:37.880103  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:37.880114  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:37.880125  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:37.880135  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:37.882406  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:37.882425  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:37.882431  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:37 GMT
	I0108 20:30:37.882437  103895 round_trippers.go:580]     Audit-Id: 7c1922bc-141e-4a70-b955-7c8ee1a289e7
	I0108 20:30:37.882443  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:37.882450  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:37.882458  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:37.882467  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:37.882674  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:30:37.883087  103895 pod_ready.go:92] pod "kube-proxy-s267w" in "kube-system" namespace has status "Ready":"True"
	I0108 20:30:37.883104  103895 pod_ready.go:81] duration metric: took 398.929034ms waiting for pod "kube-proxy-s267w" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:37.883117  103895 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:38.080125  103895 request.go:629] Waited for 196.910299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-209824
	I0108 20:30:38.080192  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-209824
	I0108 20:30:38.080199  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:38.080211  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:38.080226  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:38.082755  103895 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:30:38.082785  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:38.082798  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:38.082807  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:38 GMT
	I0108 20:30:38.082815  103895 round_trippers.go:580]     Audit-Id: f278c7c7-cdaa-4c0c-8b9a-b591f6ed619e
	I0108 20:30:38.082823  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:38.082833  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:38.082842  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:38.082969  103895 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-209824","namespace":"kube-system","uid":"dfd223e6-f902-4432-bdd8-b39f4c0d276f","resourceVersion":"423","creationTimestamp":"2024-01-08T20:28:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"04a6ca48d9f71335316ab83231ec8e96","kubernetes.io/config.mirror":"04a6ca48d9f71335316ab83231ec8e96","kubernetes.io/config.seen":"2024-01-08T20:28:47.413849750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 20:30:38.280604  103895 request.go:629] Waited for 197.25453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:38.280688  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-209824
	I0108 20:30:38.280693  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:38.280700  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:38.280706  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:38.283898  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:38.283935  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:38.283948  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:38.283958  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:38.283964  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:38 GMT
	I0108 20:30:38.283973  103895 round_trippers.go:580]     Audit-Id: 74a30d6f-f4fd-4753-bcd2-a7d078d13304
	I0108 20:30:38.283981  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:38.283991  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:38.284157  103895 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:28:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0108 20:30:38.284601  103895 pod_ready.go:92] pod "kube-scheduler-multinode-209824" in "kube-system" namespace has status "Ready":"True"
	I0108 20:30:38.284621  103895 pod_ready.go:81] duration metric: took 401.492049ms waiting for pod "kube-scheduler-multinode-209824" in "kube-system" namespace to be "Ready" ...
	I0108 20:30:38.284633  103895 pod_ready.go:38] duration metric: took 1.200876725s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:30:38.284648  103895 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:30:38.284724  103895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:30:38.297743  103895 system_svc.go:56] duration metric: took 13.087506ms WaitForService to wait for kubelet.
	I0108 20:30:38.297779  103895 kubeadm.go:581] duration metric: took 46.737763114s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:30:38.297804  103895 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:30:38.480249  103895 request.go:629] Waited for 182.356586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 20:30:38.480318  103895 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 20:30:38.480323  103895 round_trippers.go:469] Request Headers:
	I0108 20:30:38.480331  103895 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:30:38.480337  103895 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:30:38.483471  103895 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:30:38.483498  103895 round_trippers.go:577] Response Headers:
	I0108 20:30:38.483509  103895 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bfe714df-f7b4-4ee0-8ea7-87f922ba0e25
	I0108 20:30:38.483517  103895 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:30:38 GMT
	I0108 20:30:38.483535  103895 round_trippers.go:580]     Audit-Id: 736c3be0-cee0-4170-9612-f493c022a4c6
	I0108 20:30:38.483544  103895 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:30:38.483553  103895 round_trippers.go:580]     Content-Type: application/json
	I0108 20:30:38.483560  103895 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: a4c8c4f9-6248-49b6-9f50-abeb06105175
	I0108 20:30:38.483741  103895 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"567"},"items":[{"metadata":{"name":"multinode-209824","uid":"4a5fcca1-5002-4092-ba8d-150965fe7cef","resourceVersion":"434","creationTimestamp":"2024-01-08T20:28:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-209824","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-209824","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_28_48_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I0108 20:30:38.484224  103895 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 20:30:38.484238  103895 node_conditions.go:123] node cpu capacity is 8
	I0108 20:30:38.484249  103895 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0108 20:30:38.484252  103895 node_conditions.go:123] node cpu capacity is 8
	I0108 20:30:38.484256  103895 node_conditions.go:105] duration metric: took 186.447007ms to run NodePressure ...
	I0108 20:30:38.484267  103895 start.go:228] waiting for startup goroutines ...
	I0108 20:30:38.484301  103895 start.go:242] writing updated cluster config ...
	I0108 20:30:38.484608  103895 ssh_runner.go:195] Run: rm -f paused
	I0108 20:30:38.535485  103895 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:30:38.537368  103895 out.go:177] * Done! kubectl is now configured to use "multinode-209824" cluster and "default" namespace by default
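
The waits above poll each control-plane pod until its Ready condition reports True, pausing whenever client-go's rate limiter kicks in (the "Waited for ... due to client-side throttling" lines). A minimal client-go sketch of the same polling loop, assuming a placeholder kubeconfig path; this is an illustration, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // swallow transient API errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Hypothetical kubeconfig path; use whatever your profile writes.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-multinode-209824", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}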
	
	
	==> CRI-O <==
	Jan 08 20:29:32 multinode-209824 crio[961]: time="2024-01-08 20:29:32.632653014Z" level=info msg="Created container 1486994bd2a0e6353f18840ce4e55ebd17b79c991d494e30b12a65ba79d01429: kube-system/coredns-5dd5756b68-ds62v/coredns" id=a8939498-1b5b-4982-899e-a44cc0b3b8aa name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:29:32 multinode-209824 crio[961]: time="2024-01-08 20:29:32.632816390Z" level=info msg="Starting container: 20d32f396a9a3da93d7133184c7b0a47b9fbb065384b082c1becc95169a4ae8c" id=7229ca5d-eaff-4c13-aed5-68f85a23e9c4 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:29:32 multinode-209824 crio[961]: time="2024-01-08 20:29:32.633090600Z" level=info msg="Starting container: 1486994bd2a0e6353f18840ce4e55ebd17b79c991d494e30b12a65ba79d01429" id=0d8e0b65-5a72-4ec8-b94f-69ee71fb7257 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:29:32 multinode-209824 crio[961]: time="2024-01-08 20:29:32.640615070Z" level=info msg="Started container" PID=2336 containerID=20d32f396a9a3da93d7133184c7b0a47b9fbb065384b082c1becc95169a4ae8c description=kube-system/storage-provisioner/storage-provisioner id=7229ca5d-eaff-4c13-aed5-68f85a23e9c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bec4a2148e7bd44c3612071866ada1930d504513a038cb67b04cc1a735c86f4
	Jan 08 20:29:32 multinode-209824 crio[961]: time="2024-01-08 20:29:32.640790648Z" level=info msg="Started container" PID=2338 containerID=1486994bd2a0e6353f18840ce4e55ebd17b79c991d494e30b12a65ba79d01429 description=kube-system/coredns-5dd5756b68-ds62v/coredns id=0d8e0b65-5a72-4ec8-b94f-69ee71fb7257 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b79ca36b609f6f446f34c85911ff5127a300ec98da2a4d50fcc803c281e7fe80
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.617181920Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-6c6nv/POD" id=0b23c441-5ab0-42b9-9c6a-aa00dbace1dc name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.617268686Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.635714723Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-6c6nv Namespace:default ID:65d3f5b78d196ba226f45be27c52d2314463f270d19113659d06ad51ceef28b1 UID:974297fe-cba5-4b08-becc-894da91ee771 NetNS:/var/run/netns/239fb251-4988-4505-8395-709d3f22056f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.635759373Z" level=info msg="Adding pod default_busybox-5bc68d56bd-6c6nv to CNI network \"kindnet\" (type=ptp)"
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.645246104Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-6c6nv Namespace:default ID:65d3f5b78d196ba226f45be27c52d2314463f270d19113659d06ad51ceef28b1 UID:974297fe-cba5-4b08-becc-894da91ee771 NetNS:/var/run/netns/239fb251-4988-4505-8395-709d3f22056f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.645420053Z" level=info msg="Checking pod default_busybox-5bc68d56bd-6c6nv for CNI network kindnet (type=ptp)"
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.671602014Z" level=info msg="Ran pod sandbox 65d3f5b78d196ba226f45be27c52d2314463f270d19113659d06ad51ceef28b1 with infra container: default/busybox-5bc68d56bd-6c6nv/POD" id=0b23c441-5ab0-42b9-9c6a-aa00dbace1dc name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.672678990Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1b749679-e9e0-409e-8192-489ee57883de name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.672869871Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1b749679-e9e0-409e-8192-489ee57883de name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.673616871Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=0ece9108-8ae7-40dd-bd91-aec5225125b5 name=/runtime.v1.ImageService/PullImage
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.678417909Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 20:30:39 multinode-209824 crio[961]: time="2024-01-08 20:30:39.829375761Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.261255986Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=0ece9108-8ae7-40dd-bd91-aec5225125b5 name=/runtime.v1.ImageService/PullImage
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.262815383Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=f99fd440-e1ad-43d4-be3d-4e63ac8a20f5 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.264107236Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f99fd440-e1ad-43d4-be3d-4e63ac8a20f5 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.265310137Z" level=info msg="Creating container: default/busybox-5bc68d56bd-6c6nv/busybox" id=dc7a1dc1-6bb3-44b0-9169-d58e88de9c12 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.265518998Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.350853847Z" level=info msg="Created container cf114bbf23904d8ed03ae39705e4804f38503f4af7a78f3e8ea83955b242c0d7: default/busybox-5bc68d56bd-6c6nv/busybox" id=dc7a1dc1-6bb3-44b0-9169-d58e88de9c12 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.352246194Z" level=info msg="Starting container: cf114bbf23904d8ed03ae39705e4804f38503f4af7a78f3e8ea83955b242c0d7" id=724655f2-644d-41c1-af19-394cfc06b64c name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 20:30:40 multinode-209824 crio[961]: time="2024-01-08 20:30:40.360112457Z" level=info msg="Started container" PID=2528 containerID=cf114bbf23904d8ed03ae39705e4804f38503f4af7a78f3e8ea83955b242c0d7 description=default/busybox-5bc68d56bd-6c6nv/busybox id=724655f2-644d-41c1-af19-394cfc06b64c name=/runtime.v1.RuntimeService/StartContainer sandboxID=65d3f5b78d196ba226f45be27c52d2314463f270d19113659d06ad51ceef28b1
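
Each "Created container" / "Started container" pair above is CRI-O answering a CRI gRPC call from the kubelet (/runtime.v1.RuntimeService/CreateContainer and StartContainer). A sketch that talks to the same socket and lists the resulting containers over the CRI API; it assumes the k8s.io/cri-api module and root access to the CRI-O socket:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same socket the kubelet uses, per the node's cri-socket annotation above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Prints truncated IDs like the "container status" table below.
		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}

crictl ps speaks to the same endpoint and is the usual interactive equivalent.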
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cf114bbf23904       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   5 seconds ago        Running             busybox                   0                   65d3f5b78d196       busybox-5bc68d56bd-6c6nv
	1486994bd2a0e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   b79ca36b609f6       coredns-5dd5756b68-ds62v
	20d32f396a9a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   1bec4a2148e7b       storage-provisioner
	6ea51e69a2ec1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   c99698fd3d343       kube-proxy-s267w
	739cdd84ecdcc       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   3e75447cd999b       kindnet-k59d5
	a7e53f549ced6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Running             kube-controller-manager   0                   66e2f70e84be2       kube-controller-manager-multinode-209824
	c7498d6c7111e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      0                   63f971e436a94       etcd-multinode-209824
	b595f7d45acc1       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            0                   905aecb571dcf       kube-scheduler-multinode-209824
	9beafc11f6d36       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Running             kube-apiserver            0                   e79c386013994       kube-apiserver-multinode-209824
	
	
	==> coredns [1486994bd2a0e6353f18840ce4e55ebd17b79c991d494e30b12a65ba79d01429] <==
	[INFO] 10.244.0.3:49943 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080526s
	[INFO] 10.244.1.2:44348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144816s
	[INFO] 10.244.1.2:40816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00205673s
	[INFO] 10.244.1.2:48081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109837s
	[INFO] 10.244.1.2:57984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086052s
	[INFO] 10.244.1.2:38473 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001400205s
	[INFO] 10.244.1.2:47899 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081755s
	[INFO] 10.244.1.2:58413 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069062s
	[INFO] 10.244.1.2:38385 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058563s
	[INFO] 10.244.0.3:59579 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164183s
	[INFO] 10.244.0.3:33341 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081249s
	[INFO] 10.244.0.3:38442 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000044979s
	[INFO] 10.244.0.3:58545 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059226s
	[INFO] 10.244.1.2:49887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129584s
	[INFO] 10.244.1.2:34980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008872s
	[INFO] 10.244.1.2:51375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061876s
	[INFO] 10.244.1.2:45783 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056464s
	[INFO] 10.244.0.3:58344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149392s
	[INFO] 10.244.0.3:32888 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119276s
	[INFO] 10.244.0.3:37876 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095792s
	[INFO] 10.244.0.3:33164 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069685s
	[INFO] 10.244.1.2:34746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192435s
	[INFO] 10.244.1.2:55695 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155859s
	[INFO] 10.244.1.2:35412 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084103s
	[INFO] 10.244.1.2:34415 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148031s
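
The query pattern above is the pod resolver walking its search path: "kubernetes.default" alone returns NXDOMAIN, "kubernetes.default.default.svc.cluster.local" returns NXDOMAIN, and only the fully qualified "kubernetes.default.svc.cluster.local" answers NOERROR with the Service IP. A tiny sketch that triggers the same A/AAAA lookups; it only resolves when run inside a pod, where /etc/resolv.conf carries the cluster search domains:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Unqualified name: the resolver appends the pod's search domains
	// (<ns>.svc.cluster.local, svc.cluster.local, cluster.local) until one answers.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // e.g. [10.96.0.1]
}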
	
	
	==> describe nodes <==
	Name:               multinode-209824
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-209824
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-209824
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_28_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-209824
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:30:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:29:32 +0000   Mon, 08 Jan 2024 20:28:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:29:32 +0000   Mon, 08 Jan 2024 20:28:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:29:32 +0000   Mon, 08 Jan 2024 20:28:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:29:32 +0000   Mon, 08 Jan 2024 20:29:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-209824
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9030069a8424d61ac8185fd06a17834
	  System UUID:                0f55ee71-cdea-4585-86ab-c6b7cd4263e6
	  Boot ID:                    0e88edaa-666a-4348-8c8d-059e8a9aec1e
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-6c6nv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-ds62v                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-multinode-209824                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-k59d5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-multinode-209824             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-multinode-209824    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-s267w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-multinode-209824             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node multinode-209824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node multinode-209824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node multinode-209824 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node multinode-209824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node multinode-209824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node multinode-209824 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s                 node-controller  Node multinode-209824 event: Registered Node multinode-209824 in Controller
	  Normal  NodeReady                73s                  kubelet          Node multinode-209824 status is now: NodeReady
	
	
	Name:               multinode-209824-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-209824-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-209824
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T20_29_51_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-209824-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:30:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:30:37 +0000   Mon, 08 Jan 2024 20:29:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:30:37 +0000   Mon, 08 Jan 2024 20:29:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:30:37 +0000   Mon, 08 Jan 2024 20:29:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:30:37 +0000   Mon, 08 Jan 2024 20:30:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-209824-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fa54a58477d4a9bb4ff175d84b75908
	  System UUID:                279aaf90-a267-449b-9ac4-7b4cba267565
	  Boot ID:                    0e88edaa-666a-4348-8c8d-059e8a9aec1e
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-v8fbl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-8796v               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-proxy-7gtj2            0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  NodeHasSufficientMemory  55s (x5 over 56s)  kubelet          Node multinode-209824-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x5 over 56s)  kubelet          Node multinode-209824-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x5 over 56s)  kubelet          Node multinode-209824-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node multinode-209824-m02 event: Registered Node multinode-209824-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-209824-m02 status is now: NodeReady
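
The Allocatable blocks in these two node descriptions are exactly what minikube's node_conditions check read earlier in the log ("node storage ephemeral capacity is 304681132Ki", "node cpu capacity is 8", once per node). A client-go sketch that lists nodes, prints those fields, and flags any pressure condition; the kubeconfig path is again a placeholder:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		eph := n.Status.Allocatable[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  WARNING: %s is %s\n", c.Type, c.Status)
				}
			}
		}
	}
}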
	
	
	==> dmesg <==
	[  +0.004992] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006676] FS-Cache: N-cookie d=00000000a15bc294{9p.inode} n=000000005063de54
	[  +0.008878] FS-Cache: N-key=[8] '99a00f0200000000'
	[  +3.062042] FS-Cache: Duplicate cookie detected
	[  +0.004771] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006803] FS-Cache: O-cookie d=0000000075a83ff0{9P.session} n=00000000d5056a16
	[  +0.007554] FS-Cache: O-key=[10] '34323935383035373738'
	[  +0.005406] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006649] FS-Cache: N-cookie d=0000000075a83ff0{9P.session} n=00000000a5112cf2
	[  +0.008936] FS-Cache: N-key=[10] '34323935383035373738'
	[  +6.625953] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 20:20] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[  +1.026420] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[  +2.015866] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[Jan 8 20:21] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[  +8.191099] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[ +16.130308] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	[Jan 8 20:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 72 d4 04 8a 97 99 16 d1 1a 51 ed b4 08 00
	
	
	==> etcd [c7498d6c7111e868f08d416aeb5a0f1c825a04eed2aaabc43ae15d3a85fb1c34] <==
	{"level":"info","ts":"2024-01-08T20:28:41.819795Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:28:41.819811Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:28:41.821182Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T20:28:41.821311Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T20:28:41.82135Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T20:28:41.821374Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T20:28:41.821485Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T20:28:42.207968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:28:42.208042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:28:42.208094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-08T20:28:42.208115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:28:42.208124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T20:28:42.208138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:28:42.20815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T20:28:42.209295Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:28:42.209991Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-209824 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:28:42.209995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:28:42.210023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:28:42.210238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:28:42.210258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:28:42.210436Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:28:42.210598Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:28:42.21067Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:28:42.211575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:28:42.211608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	
	==> kernel <==
	 20:30:45 up  1:12,  0 users,  load average: 0.50, 1.20, 0.98
	Linux multinode-209824 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [739cdd84ecdcc03b6bb5b978131a7b6b3bd8d1b2cb1f547a7b4d80a118769512] <==
	I0108 20:29:51.768610       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:29:51.768643       1 main.go:227] handling current node
	I0108 20:29:51.768653       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 20:29:51.768658       1 main.go:250] Node multinode-209824-m02 has CIDR [10.244.1.0/24] 
	I0108 20:29:51.768870       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0108 20:30:01.779128       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:30:01.779156       1 main.go:227] handling current node
	I0108 20:30:01.779166       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 20:30:01.779170       1 main.go:250] Node multinode-209824-m02 has CIDR [10.244.1.0/24] 
	I0108 20:30:11.792161       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:30:11.792205       1 main.go:227] handling current node
	I0108 20:30:11.792219       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 20:30:11.792225       1 main.go:250] Node multinode-209824-m02 has CIDR [10.244.1.0/24] 
	I0108 20:30:21.796286       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:30:21.796312       1 main.go:227] handling current node
	I0108 20:30:21.796322       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 20:30:21.796327       1 main.go:250] Node multinode-209824-m02 has CIDR [10.244.1.0/24] 
	I0108 20:30:31.800384       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:30:31.800412       1 main.go:227] handling current node
	I0108 20:30:31.800423       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 20:30:31.800428       1 main.go:250] Node multinode-209824-m02 has CIDR [10.244.1.0/24] 
	I0108 20:30:41.813796       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 20:30:41.813828       1 main.go:227] handling current node
	I0108 20:30:41.813842       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 20:30:41.813850       1 main.go:250] Node multinode-209824-m02 has CIDR [10.244.1.0/24] 
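
Each "Adding route" line is kindnet programming the node's routing table so traffic for the peer's pod CIDR (10.244.1.0/24) is forwarded via that node's IP (192.168.58.3). A netlink sketch of the same operation, not kindnet's actual code; it needs root on Linux:

package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Dst and Gw taken from the kindnet log line above.
	_, dst, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		panic(err)
	}
	route := netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("192.168.58.3"),
	}
	// RouteReplace adds the route or updates an existing one, so it stays
	// idempotent across a reconcile loop like the roughly-10-second passes above.
	if err := netlink.RouteReplace(&route); err != nil {
		panic(err)
	}
}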
	
	
	==> kube-apiserver [9beafc11f6d367522f765aafaea43646c3fe10722c3d6e75377010b5149a1019] <==
	I0108 20:28:44.332872       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 20:28:44.345901       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:28:44.355011       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 20:28:44.355081       1 aggregator.go:166] initial CRD sync complete...
	I0108 20:28:44.355079       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 20:28:44.355092       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 20:28:44.355100       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 20:28:44.355105       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 20:28:44.355101       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 20:28:44.355152       1 cache.go:39] Caches are synced for autoregister controller
	I0108 20:28:45.161343       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 20:28:45.165346       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 20:28:45.165367       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 20:28:45.680730       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:28:45.730350       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 20:28:45.798021       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 20:28:45.806062       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0108 20:28:45.807446       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 20:28:45.813114       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 20:28:46.309795       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 20:28:47.321822       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 20:28:47.334225       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 20:28:47.347399       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 20:29:00.308293       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 20:29:00.408918       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
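
The "allocated clusterIPs" lines are the apiserver's Service IP allocator handing out addresses from the cluster's Service CIDR at creation time (10.96.0.1 for kubernetes, 10.96.0.10 for kube-dns above). A sketch that creates a throwaway Service and prints the IP the apiserver assigns; the names and kubeconfig path are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.Background(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated ClusterIP:", created.Spec.ClusterIP)
}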
	
	
	==> kube-controller-manager [a7e53f549ced61ae54f100512ad154bd25de3aecf3bd7212778d6e21911617e8] <==
	I0108 20:29:32.224329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.823µs"
	I0108 20:29:33.707242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="161.527µs"
	I0108 20:29:33.729942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.865222ms"
	I0108 20:29:33.730097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.696µs"
	I0108 20:29:35.351852       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 20:29:50.713065       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-209824-m02\" does not exist"
	I0108 20:29:50.724116       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8796v"
	I0108 20:29:50.725051       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7gtj2"
	I0108 20:29:50.728025       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-209824-m02" podCIDRs=["10.244.1.0/24"]
	I0108 20:29:55.355713       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-209824-m02"
	I0108 20:29:55.355710       1 event.go:307] "Event occurred" object="multinode-209824-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-209824-m02 event: Registered Node multinode-209824-m02 in Controller"
	I0108 20:30:37.045363       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-209824-m02"
	I0108 20:30:39.295122       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 20:30:39.301557       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-v8fbl"
	I0108 20:30:39.310502       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-6c6nv"
	I0108 20:30:39.316871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.160314ms"
	I0108 20:30:39.323135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.073157ms"
	I0108 20:30:39.325862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="2.221953ms"
	I0108 20:30:39.330195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.968µs"
	I0108 20:30:39.333468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="156.358µs"
	I0108 20:30:40.373484       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-v8fbl" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-v8fbl"
	I0108 20:30:40.843958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.71323ms"
	I0108 20:30:40.844075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.652µs"
	I0108 20:30:41.329328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.78446ms"
	I0108 20:30:41.329440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.668µs"
	
	
	==> kube-proxy [6ea51e69a2ec1b6881b834ebccd3d83a4803c73d0636168b300f0db79a985821] <==
	I0108 20:29:01.515306       1 server_others.go:69] "Using iptables proxy"
	I0108 20:29:01.597181       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0108 20:29:01.625877       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 20:29:01.689973       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:29:01.690027       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 20:29:01.690039       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 20:29:01.690077       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:29:01.690424       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:29:01.690443       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:29:01.691398       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:29:01.691444       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:29:01.691496       1 config.go:188] "Starting service config controller"
	I0108 20:29:01.691587       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:29:01.694185       1 config.go:315] "Starting node config controller"
	I0108 20:29:01.694302       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:29:01.792227       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:29:01.792252       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:29:01.794385       1 shared_informer.go:318] Caches are synced for node config
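
The three config controllers above all follow the standard informer pattern: start the shared informers, then block on a cache sync before serving, which is the "Waiting for caches to sync" / "Caches are synced" pairing in the log. A stripped-down sketch of that pattern (an illustration, not kube-proxy's code):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Mirrors the "Waiting for caches to sync" / "Caches are synced" log pair.
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("caches are synced for service config")
}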
	
	
	==> kube-scheduler [b595f7d45acc1a3c975dc98e3f9c77aa7efa2814038dfa6b475b4a57be91a49b] <==
	E0108 20:28:44.305960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:44.305977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:28:44.305540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:44.305990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:28:44.305666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:28:44.306003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:28:44.304842       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:28:44.306008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:28:44.306018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:28:44.306026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:28:45.177603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:28:45.177640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 20:28:45.257950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:45.257993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 20:28:45.288605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:28:45.288643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:28:45.304903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:28:45.304952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:28:45.466008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:45.466060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:28:45.512021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:28:45.512052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:28:45.723588       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:28:45.723639       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 20:28:48.301816       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
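
These list/watch "forbidden" warnings are the scheduler starting before the restarted apiserver is serving its RBAC bindings; once the informer caches sync (the final line above), they stop. If they kept recurring, a quick spot-check would be to impersonate the scheduler and ask the apiserver directly. This is a hypothetical diagnostic, not part of the test run; it assumes the kubectl context named elsewhere in this report:

	# Ask the apiserver whether system:kube-scheduler may list each denied resource.
	for r in services persistentvolumeclaims replicationcontrollers nodes; do
	  kubectl --context multinode-209824 auth can-i list "$r" --as=system:kube-scheduler
	done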
	
	
	==> kubelet <==
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: I0108 20:29:00.489844    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cc861346-590e-440e-b826-f9a35f006571-cni-cfg\") pod \"kindnet-k59d5\" (UID: \"cc861346-590e-440e-b826-f9a35f006571\") " pod="kube-system/kindnet-k59d5"
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: I0108 20:29:00.489871    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/825c87c7-7b31-44a0-9009-1603f045b6a8-xtables-lock\") pod \"kube-proxy-s267w\" (UID: \"825c87c7-7b31-44a0-9009-1603f045b6a8\") " pod="kube-system/kube-proxy-s267w"
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: I0108 20:29:00.489898    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc861346-590e-440e-b826-f9a35f006571-xtables-lock\") pod \"kindnet-k59d5\" (UID: \"cc861346-590e-440e-b826-f9a35f006571\") " pod="kube-system/kindnet-k59d5"
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: I0108 20:29:00.489924    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc861346-590e-440e-b826-f9a35f006571-lib-modules\") pod \"kindnet-k59d5\" (UID: \"cc861346-590e-440e-b826-f9a35f006571\") " pod="kube-system/kindnet-k59d5"
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: I0108 20:29:00.489947    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/825c87c7-7b31-44a0-9009-1603f045b6a8-kube-proxy\") pod \"kube-proxy-s267w\" (UID: \"825c87c7-7b31-44a0-9009-1603f045b6a8\") " pod="kube-system/kube-proxy-s267w"
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: I0108 20:29:00.489988    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/825c87c7-7b31-44a0-9009-1603f045b6a8-lib-modules\") pod \"kube-proxy-s267w\" (UID: \"825c87c7-7b31-44a0-9009-1603f045b6a8\") " pod="kube-system/kube-proxy-s267w"
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: W0108 20:29:00.824824    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/crio-3e75447cd999bbefacf1f6093007ce84036e906df7e88f334be062df7027440f WatchSource:0}: Error finding container 3e75447cd999bbefacf1f6093007ce84036e906df7e88f334be062df7027440f: Status 404 returned error can't find the container with id 3e75447cd999bbefacf1f6093007ce84036e906df7e88f334be062df7027440f
	Jan 08 20:29:00 multinode-209824 kubelet[1587]: W0108 20:29:00.825190    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/crio-c99698fd3d343bebe7d4e954b5da32d3a21f60ca347dea68778e8add8456c5a7 WatchSource:0}: Error finding container c99698fd3d343bebe7d4e954b5da32d3a21f60ca347dea68778e8add8456c5a7: Status 404 returned error can't find the container with id c99698fd3d343bebe7d4e954b5da32d3a21f60ca347dea68778e8add8456c5a7
	Jan 08 20:29:01 multinode-209824 kubelet[1587]: I0108 20:29:01.619694    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-k59d5" podStartSLOduration=1.6196281940000001 podCreationTimestamp="2024-01-08 20:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:29:01.606468863 +0000 UTC m=+14.318351424" watchObservedRunningTime="2024-01-08 20:29:01.619628194 +0000 UTC m=+14.331510795"
	Jan 08 20:29:07 multinode-209824 kubelet[1587]: I0108 20:29:07.516475    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s267w" podStartSLOduration=7.516412397 podCreationTimestamp="2024-01-08 20:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:29:01.625871519 +0000 UTC m=+14.337754076" watchObservedRunningTime="2024-01-08 20:29:07.516412397 +0000 UTC m=+20.228294954"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: I0108 20:29:32.177044    1587 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: I0108 20:29:32.202763    1587 topology_manager.go:215] "Topology Admit Handler" podUID="926641e0-32b5-4e31-8361-c677061ec067" podNamespace="kube-system" podName="coredns-5dd5756b68-ds62v"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: I0108 20:29:32.203053    1587 topology_manager.go:215] "Topology Admit Handler" podUID="64668c85-5cc7-4433-afea-3398724f09d1" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: I0108 20:29:32.308316    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plmvq\" (UniqueName: \"kubernetes.io/projected/64668c85-5cc7-4433-afea-3398724f09d1-kube-api-access-plmvq\") pod \"storage-provisioner\" (UID: \"64668c85-5cc7-4433-afea-3398724f09d1\") " pod="kube-system/storage-provisioner"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: I0108 20:29:32.308389    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/926641e0-32b5-4e31-8361-c677061ec067-config-volume\") pod \"coredns-5dd5756b68-ds62v\" (UID: \"926641e0-32b5-4e31-8361-c677061ec067\") " pod="kube-system/coredns-5dd5756b68-ds62v"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: I0108 20:29:32.308418    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/64668c85-5cc7-4433-afea-3398724f09d1-tmp\") pod \"storage-provisioner\" (UID: \"64668c85-5cc7-4433-afea-3398724f09d1\") " pod="kube-system/storage-provisioner"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: I0108 20:29:32.308511    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qgtf\" (UniqueName: \"kubernetes.io/projected/926641e0-32b5-4e31-8361-c677061ec067-kube-api-access-7qgtf\") pod \"coredns-5dd5756b68-ds62v\" (UID: \"926641e0-32b5-4e31-8361-c677061ec067\") " pod="kube-system/coredns-5dd5756b68-ds62v"
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: W0108 20:29:32.560448    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/crio-1bec4a2148e7bd44c3612071866ada1930d504513a038cb67b04cc1a735c86f4 WatchSource:0}: Error finding container 1bec4a2148e7bd44c3612071866ada1930d504513a038cb67b04cc1a735c86f4: Status 404 returned error can't find the container with id 1bec4a2148e7bd44c3612071866ada1930d504513a038cb67b04cc1a735c86f4
	Jan 08 20:29:32 multinode-209824 kubelet[1587]: W0108 20:29:32.560772    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/crio-b79ca36b609f6f446f34c85911ff5127a300ec98da2a4d50fcc803c281e7fe80 WatchSource:0}: Error finding container b79ca36b609f6f446f34c85911ff5127a300ec98da2a4d50fcc803c281e7fe80: Status 404 returned error can't find the container with id b79ca36b609f6f446f34c85911ff5127a300ec98da2a4d50fcc803c281e7fe80
	Jan 08 20:29:33 multinode-209824 kubelet[1587]: I0108 20:29:33.706466    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ds62v" podStartSLOduration=33.70641409 podCreationTimestamp="2024-01-08 20:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:29:33.706179763 +0000 UTC m=+46.418062321" watchObservedRunningTime="2024-01-08 20:29:33.70641409 +0000 UTC m=+46.418296648"
	Jan 08 20:29:33 multinode-209824 kubelet[1587]: I0108 20:29:33.706595    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.706571397 podCreationTimestamp="2024-01-08 20:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:29:32.704182608 +0000 UTC m=+45.416065167" watchObservedRunningTime="2024-01-08 20:29:33.706571397 +0000 UTC m=+46.418453955"
	Jan 08 20:30:39 multinode-209824 kubelet[1587]: I0108 20:30:39.315304    1587 topology_manager.go:215] "Topology Admit Handler" podUID="974297fe-cba5-4b08-becc-894da91ee771" podNamespace="default" podName="busybox-5bc68d56bd-6c6nv"
	Jan 08 20:30:39 multinode-209824 kubelet[1587]: I0108 20:30:39.419592    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdp7s\" (UniqueName: \"kubernetes.io/projected/974297fe-cba5-4b08-becc-894da91ee771-kube-api-access-jdp7s\") pod \"busybox-5bc68d56bd-6c6nv\" (UID: \"974297fe-cba5-4b08-becc-894da91ee771\") " pod="default/busybox-5bc68d56bd-6c6nv"
	Jan 08 20:30:39 multinode-209824 kubelet[1587]: W0108 20:30:39.668649    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/crio-65d3f5b78d196ba226f45be27c52d2314463f270d19113659d06ad51ceef28b1 WatchSource:0}: Error finding container 65d3f5b78d196ba226f45be27c52d2314463f270d19113659d06ad51ceef28b1: Status 404 returned error can't find the container with id 65d3f5b78d196ba226f45be27c52d2314463f270d19113659d06ad51ceef28b1
	Jan 08 20:30:40 multinode-209824 kubelet[1587]: I0108 20:30:40.839676    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-6c6nv" podStartSLOduration=1.250566705 podCreationTimestamp="2024-01-08 20:30:39 +0000 UTC" firstStartedPulling="2024-01-08 20:30:39.673050035 +0000 UTC m=+112.384932586" lastFinishedPulling="2024-01-08 20:30:40.262082615 +0000 UTC m=+112.973965155" observedRunningTime="2024-01-08 20:30:40.839023057 +0000 UTC m=+113.550905616" watchObservedRunningTime="2024-01-08 20:30:40.839599274 +0000 UTC m=+113.551481835"
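
The kubelet's "Failed to process watch event ... 404" warnings come from cAdvisor racing CRI-O: the cgroup watch fires before cAdvisor can inspect the new container's directory, so the lookup by ID 404s. They are usually benign. A hedged sanity check (hypothetical, not something the test runs) is to compare the logged IDs against CRI-O's own view from inside the node:

	# List every CRI-O container, including exited ones, to confirm the watched IDs ran.
	minikube -p multinode-209824 ssh -- sudo crictl ps -a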
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-209824 -n multinode-209824
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-209824 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.12s)
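
For reference, the connectivity probe this test automates can be retried by hand. A minimal sketch, assuming the busybox deployment from this run is still up, carries the default app=busybox label, and can resolve host.minikube.internal (all assumptions, since the failing output itself is not reproduced here):

	# Pick one busybox pod and ping the host from inside it.
	POD=$(kubectl --context multinode-209824 get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
	kubectl --context multinode-209824 exec "$POD" -- sh -c "ping -c 1 host.minikube.internal"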

TestRunningBinaryUpgrade (73.15s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.1057266490.exe start -p running-upgrade-838183 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.1057266490.exe start -p running-upgrade-838183 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m6.84340625s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-838183 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-838183 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.520081905s)

-- stdout --
	* [running-upgrade-838183] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-838183 in cluster running-upgrade-838183
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "running-upgrade-838183" container ...
	
	

-- /stdout --
** stderr ** 
	I0108 20:42:17.182255  178290 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:42:17.182559  178290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:42:17.182569  178290 out.go:309] Setting ErrFile to fd 2...
	I0108 20:42:17.182576  178290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:42:17.182805  178290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:42:17.183713  178290 out.go:303] Setting JSON to false
	I0108 20:42:17.186384  178290 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5063,"bootTime":1704741474,"procs":673,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:42:17.186543  178290 start.go:138] virtualization: kvm guest
	I0108 20:42:17.189426  178290 out.go:177] * [running-upgrade-838183] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:42:17.191478  178290 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:42:17.191526  178290 notify.go:220] Checking for updates...
	I0108 20:42:17.195003  178290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:42:17.196654  178290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:42:17.198386  178290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:42:17.200080  178290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:42:17.201606  178290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:42:17.203605  178290 config.go:182] Loaded profile config "running-upgrade-838183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 20:42:17.203635  178290 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:42:17.205683  178290 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 20:42:17.207043  178290 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:42:17.237470  178290 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:42:17.237686  178290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:42:17.310012  178290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:72 SystemTime:2024-01-08 20:42:17.295704471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:42:17.310210  178290 docker.go:295] overlay module found
	I0108 20:42:17.312054  178290 out.go:177] * Using the docker driver based on existing profile
	I0108 20:42:17.313702  178290 start.go:298] selected driver: docker
	I0108 20:42:17.313727  178290 start.go:902] validating driver "docker" against &{Name:running-upgrade-838183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-838183 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:42:17.313860  178290 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:42:17.315225  178290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:42:17.381833  178290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:72 SystemTime:2024-01-08 20:42:17.36961983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:42:17.382449  178290 cni.go:84] Creating CNI manager for ""
	I0108 20:42:17.382483  178290 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 20:42:17.382500  178290 start_flags.go:323] config:
	{Name:running-upgrade-838183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-838183 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:42:17.385261  178290 out.go:177] * Starting control plane node running-upgrade-838183 in cluster running-upgrade-838183
	I0108 20:42:17.387126  178290 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:42:17.389192  178290 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:42:17.391197  178290 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0108 20:42:17.391284  178290 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:42:17.416688  178290 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:42:17.416735  178290 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	W0108 20:42:17.433285  178290 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 20:42:17.433435  178290 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/running-upgrade-838183/config.json ...
	I0108 20:42:17.433468  178290 cache.go:107] acquiring lock: {Name:mkbdc7816af42985ec3faea6eb369b3f79d337bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433502  178290 cache.go:107] acquiring lock: {Name:mkfe56eabb4845c24541670fe01dff353f2a9610 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433483  178290 cache.go:107] acquiring lock: {Name:mkef70d9482c6b8293bc341d6d74740871f0b346 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433581  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0108 20:42:17.433572  178290 cache.go:107] acquiring lock: {Name:mkfeda513bc66c61e081cda1f34dce4df6583868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433595  178290 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 98.399µs
	I0108 20:42:17.433608  178290 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0108 20:42:17.433584  178290 cache.go:107] acquiring lock: {Name:mkb0599c97382983041e723f866c7f01c300e9c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433612  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0108 20:42:17.433627  178290 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 176.01µs
	I0108 20:42:17.433633  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0108 20:42:17.433645  178290 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0108 20:42:17.433608  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 20:42:17.433649  178290 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 77.607µs
	I0108 20:42:17.433659  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0108 20:42:17.433662  178290 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 185.426µs
	I0108 20:42:17.433671  178290 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 20:42:17.433671  178290 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 94.277µs
	I0108 20:42:17.433692  178290 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0108 20:42:17.433661  178290 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0108 20:42:17.433528  178290 cache.go:107] acquiring lock: {Name:mkb65e6efbc3de1903cfd8168e6651e2050d10ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433703  178290 cache.go:107] acquiring lock: {Name:mkd75ce0d4fa468fb7a21624f0ca65e9f5547ae5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433755  178290 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:42:17.433752  178290 cache.go:107] acquiring lock: {Name:mk6db05523f201826cea55b8cf29218150138037 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433792  178290 start.go:365] acquiring machines lock for running-upgrade-838183: {Name:mk5b785718236d132ffb55878a90839da57490a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:17.433839  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0108 20:42:17.433738  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 20:42:17.433857  178290 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 248.284µs
	I0108 20:42:17.433878  178290 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0108 20:42:17.433881  178290 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 347.457µs
	I0108 20:42:17.433891  178290 start.go:369] acquired machines lock for "running-upgrade-838183" in 83.412µs
	I0108 20:42:17.433898  178290 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 20:42:17.433915  178290 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:42:17.433927  178290 fix.go:54] fixHost starting: m01
	I0108 20:42:17.433894  178290 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0108 20:42:17.434003  178290 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 278.762µs
	I0108 20:42:17.434023  178290 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0108 20:42:17.434038  178290 cache.go:87] Successfully saved all images to host disk.
	I0108 20:42:17.434224  178290 cli_runner.go:164] Run: docker container inspect running-upgrade-838183 --format={{.State.Status}}
	I0108 20:42:17.460427  178290 fix.go:102] recreateIfNeeded on running-upgrade-838183: state=Running err=<nil>
	W0108 20:42:17.460458  178290 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:42:17.462363  178290 out.go:177] * Updating the running docker "running-upgrade-838183" container ...
	I0108 20:42:17.463752  178290 machine.go:88] provisioning docker machine ...
	I0108 20:42:17.463779  178290 ubuntu.go:169] provisioning hostname "running-upgrade-838183"
	I0108 20:42:17.463840  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:17.485164  178290 main.go:141] libmachine: Using SSH client type: native
	I0108 20:42:17.485779  178290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32931 <nil> <nil>}
	I0108 20:42:17.485804  178290 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-838183 && echo "running-upgrade-838183" | sudo tee /etc/hostname
	I0108 20:42:17.609520  178290 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-838183
	
	I0108 20:42:17.609625  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:17.629857  178290 main.go:141] libmachine: Using SSH client type: native
	I0108 20:42:17.630214  178290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32931 <nil> <nil>}
	I0108 20:42:17.630237  178290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-838183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-838183/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-838183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:42:17.744522  178290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:42:17.744567  178290 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-11003/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-11003/.minikube}
	I0108 20:42:17.744597  178290 ubuntu.go:177] setting up certificates
	I0108 20:42:17.744615  178290 provision.go:83] configureAuth start
	I0108 20:42:17.744701  178290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-838183
	I0108 20:42:17.765438  178290 provision.go:138] copyHostCerts
	I0108 20:42:17.765543  178290 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem, removing ...
	I0108 20:42:17.765561  178290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:42:17.765662  178290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem (1679 bytes)
	I0108 20:42:17.765822  178290 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem, removing ...
	I0108 20:42:17.765837  178290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:42:17.765885  178290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem (1078 bytes)
	I0108 20:42:17.766003  178290 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem, removing ...
	I0108 20:42:17.766021  178290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:42:17.766067  178290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem (1123 bytes)
	I0108 20:42:17.766164  178290 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-838183 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-838183]
	I0108 20:42:17.965505  178290 provision.go:172] copyRemoteCerts
	I0108 20:42:17.965581  178290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:42:17.965626  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:17.985250  178290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32931 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/running-upgrade-838183/id_rsa Username:docker}
	I0108 20:42:18.071603  178290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:42:18.094560  178290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 20:42:18.119091  178290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:42:18.141274  178290 provision.go:86] duration metric: configureAuth took 396.636097ms
	I0108 20:42:18.141320  178290 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:42:18.141556  178290 config.go:182] Loaded profile config "running-upgrade-838183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 20:42:18.141675  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:18.167386  178290 main.go:141] libmachine: Using SSH client type: native
	I0108 20:42:18.167732  178290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32931 <nil> <nil>}
	I0108 20:42:18.167754  178290 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:42:19.162141  178290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:42:19.162199  178290 machine.go:91] provisioned docker machine in 1.698428092s
	I0108 20:42:19.162229  178290 start.go:300] post-start starting for "running-upgrade-838183" (driver="docker")
	I0108 20:42:19.162252  178290 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:42:19.162417  178290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:42:19.162523  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:19.184095  178290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32931 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/running-upgrade-838183/id_rsa Username:docker}
	I0108 20:42:19.268747  178290 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:42:19.272056  178290 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:42:19.272091  178290 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:42:19.272104  178290 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:42:19.272114  178290 info.go:137] Remote host: Ubuntu 19.10
	I0108 20:42:19.272131  178290 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/addons for local assets ...
	I0108 20:42:19.272215  178290 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/files for local assets ...
	I0108 20:42:19.272311  178290 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> 177612.pem in /etc/ssl/certs
	I0108 20:42:19.272439  178290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:42:19.280967  178290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:42:19.302680  178290 start.go:303] post-start completed in 140.434544ms
	I0108 20:42:19.302767  178290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:42:19.302838  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:19.322331  178290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32931 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/running-upgrade-838183/id_rsa Username:docker}
	I0108 20:42:19.404173  178290 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:42:19.409556  178290 fix.go:56] fixHost completed within 1.975619724s
	I0108 20:42:19.409592  178290 start.go:83] releasing machines lock for "running-upgrade-838183", held for 1.975689565s
	I0108 20:42:19.409670  178290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-838183
	I0108 20:42:19.432604  178290 ssh_runner.go:195] Run: cat /version.json
	I0108 20:42:19.432680  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:19.432692  178290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:42:19.432766  178290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-838183
	I0108 20:42:19.455055  178290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32931 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/running-upgrade-838183/id_rsa Username:docker}
	I0108 20:42:19.456704  178290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32931 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/running-upgrade-838183/id_rsa Username:docker}
	W0108 20:42:19.570556  178290 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 20:42:19.570654  178290 ssh_runner.go:195] Run: systemctl --version
	I0108 20:42:19.576224  178290 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:42:19.633410  178290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:42:19.638328  178290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:42:19.727598  178290 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:42:19.727701  178290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:42:19.901546  178290 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:42:19.901576  178290 start.go:475] detecting cgroup driver to use...
	I0108 20:42:19.901608  178290 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:42:19.901667  178290 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:42:19.929484  178290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:42:19.942517  178290 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:42:19.942601  178290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:42:19.955100  178290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:42:19.965221  178290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 20:42:19.974660  178290 docker.go:227] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 20:42:19.974782  178290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:42:20.055838  178290 docker.go:233] disabling docker service ...
	I0108 20:42:20.055929  178290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:42:20.067516  178290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:42:20.079893  178290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:42:20.163577  178290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:42:20.240019  178290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:42:20.252132  178290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:42:20.299060  178290 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:42:20.299148  178290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:42:20.351641  178290 out.go:177] 
	W0108 20:42:20.444470  178290 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 20:42:20.444519  178290 out.go:239] * 
	* 
	W0108 20:42:20.445417  178290 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:42:20.493825  178290 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-838183 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
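
The root cause is visible in the stderr above: the machine provisioned by minikube v1.9.0 runs Ubuntu 19.10 (see "Remote host" earlier in the log) and predates CRI-O's drop-in config layout, so /etc/crio/crio.conf.d/02-crio.conf does not exist and the sed that rewrites pause_image exits with status 2. A hedged workaround sketch, assuming older base images keep the setting in the monolithic /etc/crio/crio.conf instead (this is not minikube's actual fix):

	# Prefer the drop-in file; fall back to the monolithic config on old base images.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"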
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 20:42:20.647198853 +0000 UTC m=+1980.706878548
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-838183
helpers_test.go:235: (dbg) docker inspect running-upgrade-838183:

-- stdout --
	[
	    {
	        "Id": "d6cb7020019c32f0ba3e0f1a696d74e1c7a7e32d69458f87d7c9878d0baff46e",
	        "Created": "2024-01-08T20:41:11.131776693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163545,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T20:41:11.954049107Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/d6cb7020019c32f0ba3e0f1a696d74e1c7a7e32d69458f87d7c9878d0baff46e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6cb7020019c32f0ba3e0f1a696d74e1c7a7e32d69458f87d7c9878d0baff46e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6cb7020019c32f0ba3e0f1a696d74e1c7a7e32d69458f87d7c9878d0baff46e/hosts",
	        "LogPath": "/var/lib/docker/containers/d6cb7020019c32f0ba3e0f1a696d74e1c7a7e32d69458f87d7c9878d0baff46e/d6cb7020019c32f0ba3e0f1a696d74e1c7a7e32d69458f87d7c9878d0baff46e-json.log",
	        "Name": "/running-upgrade-838183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-838183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6902b0553cecded2e4c67bd6e3437ff0c86a3c669f9df3dda5824c37ae4e970c-init/diff:/var/lib/docker/overlay2/d4ddc8782a26b7de4633ac4fe92936794bbafd0e290ae5332795e25a58f06892/diff:/var/lib/docker/overlay2/cb7c64b5e5fd75f3ae804df2d3e3e7b7c50a5beaa5b23cd92faa213bd519b848/diff:/var/lib/docker/overlay2/2faa4a99e9b94ef2ea05ce75bb3b23448a05587c0c1cc11374c34110a6a9fd4c/diff:/var/lib/docker/overlay2/de8dab1b8d34bfdc2ba142b0430abc845d13df2ba1c53f2ea3bd18947f44071f/diff:/var/lib/docker/overlay2/c81f587f035d768c1c45cf4d4aa6efd5b04996520a98f6660c9c536147f659f6/diff:/var/lib/docker/overlay2/406a2bc6d7d0e4d8b8d5fc4bfd9f07a3f2cfc218b467aa97d33e8a8ed0b0b92c/diff:/var/lib/docker/overlay2/2c6204aa68de44d5119ba8b3d15bf5884fc9f3e02825ec550c10ef4c75ac1349/diff:/var/lib/docker/overlay2/85d6d440da09a068f3a696f059cb979895dd4cb1a27f8e7c80dc64816fd20e3a/diff:/var/lib/docker/overlay2/2de9a5ab17be3c349917a93bb8df9a5570587b06cee0ec304ce1348b47aadb6d/diff:/var/lib/docker/overlay2/f0128df039f06b5ab6e31ed8baaca42a8de6c3b02d2ee79555a83d0673727219/diff:/var/lib/docker/overlay2/4e9d862e1c323d71ef56c061849fc0238b9e905d2b7eaf18cf6b5b77a16295fa/diff:/var/lib/docker/overlay2/2e65fdaecdda1fddfc4283a1f7314e300a096cecbb7792f28116706c48d51c72/diff:/var/lib/docker/overlay2/ba4e874116c1c0f2ab994a27eda20f2d4299cb702a7b4c16fdf588a96dfecafe/diff:/var/lib/docker/overlay2/b362cf6706064ea5f5cab737c750f756cffd65bb65e8d47a9334186f8477f60c/diff:/var/lib/docker/overlay2/069f04e3489f53af7a1c195ed689512cbeafb34be69a7ee214eb373c97b90ad0/diff:/var/lib/docker/overlay2/df9e00b2275f444e95b41fecabf9ae272318aae59308ed54a38d0c85b056731f/diff:/var/lib/docker/overlay2/e9e2f2b7a71cb201df27ac837dc04a6510172da8f241e3e85a6e311fcfd0b289/diff:/var/lib/docker/overlay2/456291182fb0ff23975a9efeb8d922672c420a5e9d37f2b91d48a85265627c11/diff:/var/lib/docker/overlay2/da237e94d897baf03a10bf056281669582a26d7517edcd39dc87f24fd00eec98/diff:/var/lib/docker/overlay2/55f3bb11f19f4a02192d84d92d654673a20b83b26a5ca32b2dac96127431def8/diff:/var/lib/docker/overlay2/28c953d71216325676fb26cd95ed31aec67b7db1d0c69e093150302c9807c8f7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6902b0553cecded2e4c67bd6e3437ff0c86a3c669f9df3dda5824c37ae4e970c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6902b0553cecded2e4c67bd6e3437ff0c86a3c669f9df3dda5824c37ae4e970c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6902b0553cecded2e4c67bd6e3437ff0c86a3c669f9df3dda5824c37ae4e970c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-838183",
	                "Source": "/var/lib/docker/volumes/running-upgrade-838183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-838183",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-838183",
	                "name.minikube.sigs.k8s.io": "running-upgrade-838183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d11b40006925f0790fd8b919a38751d45dae031103752c39e7f63238a276ceeb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32931"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32930"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32929"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d11b40006925",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b90ae97bbee1adbbed0368422ea0678293eb1f54f2051197d3069f6001218882",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "8e7958ebaacdef80def65f77c4adf2f6058e684ded6901e79a0757fc135b545d",
	                    "EndpointID": "b90ae97bbee1adbbed0368422ea0678293eb1f54f2051197d3069f6001218882",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
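For reference, the post-mortem above is just "docker inspect" plus JSON decoding. Below is a minimal Go sketch of the same lookup — an illustration, not minikube's actual helpers_test.go code; the struct declares only the fields read here:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// containerInfo covers only the fields this sketch reads from "docker inspect".
type containerInfo struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
}

func main() {
	// docker inspect prints a JSON array, one element per named container.
	out, err := exec.Command("docker", "inspect", "running-upgrade-838183").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	var infos []containerInfo
	if err := json.Unmarshal(out, &infos); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, c := range infos {
		fmt.Printf("%s: status=%s running=%v\n", c.Name, c.State.Status, c.State.Running)
	}
}

Run against the container above, this would print "/running-upgrade-838183: status=running running=true", matching the State block in the dump.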
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-838183 -n running-upgrade-838183
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-838183 -n running-upgrade-838183: exit status 4 (362.983298ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0108 20:42:20.992680  178890 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-838183" does not appear in /home/jenkins/minikube-integration/17907-11003/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-838183" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
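The exit status 4 above appears to come from the kubeconfig lookup shown in the stderr: the profile name is missing from the kubeconfig, so the endpoint (and with it the host IP) cannot be extracted. A rough Go sketch of that check, assuming the k8s.io/client-go dependency — this is not minikube's exact status.go code:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const profile = "running-upgrade-838183"
	const path = "/home/jenkins/minikube-integration/17907-11003/kubeconfig"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cluster, ok := cfg.Clusters[profile]
	if !ok {
		// This is the condition behind the "does not appear in ... kubeconfig"
		// error above: the profile has no cluster entry in the kubeconfig.
		log.Fatalf("kubeconfig endpoint: extract IP: %q does not appear in %s", profile, path)
	}
	fmt.Println("endpoint:", cluster.Server)
}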
helpers_test.go:175: Cleaning up "running-upgrade-838183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-838183
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-838183: (1.991583162s)
--- FAIL: TestRunningBinaryUpgrade (73.15s)
x
+
TestStoppedBinaryUpgrade/Upgrade (97.15s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.151356420.exe start -p stopped-upgrade-181266 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.151356420.exe start -p stopped-upgrade-181266 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m19.789926102s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.151356420.exe -p stopped-upgrade-181266 stop
E0108 20:41:55.069650   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.151356420.exe -p stopped-upgrade-181266 stop: (11.212024805s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-181266 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-181266 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.132723926s)
-- stdout --
	* [stopped-upgrade-181266] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-181266 in cluster stopped-upgrade-181266
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Restarting existing docker container for "stopped-upgrade-181266" ...
	
	
-- /stdout --
** stderr ** 
	I0108 20:42:05.195760  175167 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:42:05.195948  175167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:42:05.195956  175167 out.go:309] Setting ErrFile to fd 2...
	I0108 20:42:05.195964  175167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:42:05.196286  175167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:42:05.197053  175167 out.go:303] Setting JSON to false
	I0108 20:42:05.199313  175167 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5051,"bootTime":1704741474,"procs":598,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:42:05.199456  175167 start.go:138] virtualization: kvm guest
	I0108 20:42:05.202936  175167 out.go:177] * [stopped-upgrade-181266] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:42:05.205109  175167 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:42:05.205116  175167 notify.go:220] Checking for updates...
	I0108 20:42:05.206922  175167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:42:05.208825  175167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:42:05.212254  175167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:42:05.214118  175167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:42:05.215909  175167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:42:05.218303  175167 config.go:182] Loaded profile config "stopped-upgrade-181266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 20:42:05.218348  175167 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 20:42:05.221131  175167 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 20:42:05.222903  175167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:42:05.256955  175167 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:42:05.257290  175167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:42:05.381914  175167 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:79 SystemTime:2024-01-08 20:42:05.366586177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:42:05.382091  175167 docker.go:295] overlay module found
	I0108 20:42:05.385232  175167 out.go:177] * Using the docker driver based on existing profile
	I0108 20:42:05.387292  175167 start.go:298] selected driver: docker
	I0108 20:42:05.387328  175167 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-181266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-181266 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:42:05.387565  175167 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:42:05.388637  175167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:42:05.460411  175167 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:79 SystemTime:2024-01-08 20:42:05.449052869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:42:05.460970  175167 cni.go:84] Creating CNI manager for ""
	I0108 20:42:05.460997  175167 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 20:42:05.461009  175167 start_flags.go:323] config:
	{Name:stopped-upgrade-181266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-181266 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 20:42:05.464628  175167 out.go:177] * Starting control plane node stopped-upgrade-181266 in cluster stopped-upgrade-181266
	I0108 20:42:05.466413  175167 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:42:05.467936  175167 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:42:05.469405  175167 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0108 20:42:05.469522  175167 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:42:05.486781  175167 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0108 20:42:05.486832  175167 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	W0108 20:42:05.502542  175167 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 20:42:05.502674  175167 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/stopped-upgrade-181266/config.json ...
	I0108 20:42:05.502799  175167 cache.go:107] acquiring lock: {Name:mkef70d9482c6b8293bc341d6d74740871f0b346 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.502826  175167 cache.go:107] acquiring lock: {Name:mkfe56eabb4845c24541670fe01dff353f2a9610 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.502818  175167 cache.go:107] acquiring lock: {Name:mkbdc7816af42985ec3faea6eb369b3f79d337bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.502977  175167 cache.go:194] Successfully downloaded all kic artifacts
	I0108 20:42:05.502956  175167 cache.go:107] acquiring lock: {Name:mkfeda513bc66c61e081cda1f34dce4df6583868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.503009  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0108 20:42:05.502876  175167 cache.go:107] acquiring lock: {Name:mkb65e6efbc3de1903cfd8168e6651e2050d10ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.503008  175167 cache.go:107] acquiring lock: {Name:mk6db05523f201826cea55b8cf29218150138037 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.503011  175167 start.go:365] acquiring machines lock for stopped-upgrade-181266: {Name:mkdf7c814122bf0f9b6723b26e174a66d554c44a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.503031  175167 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 211.053µs
	I0108 20:42:05.503126  175167 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0108 20:42:05.502940  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 20:42:05.503143  175167 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 355.351µs
	I0108 20:42:05.503152  175167 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 20:42:05.503138  175167 cache.go:107] acquiring lock: {Name:mkb0599c97382983041e723f866c7f01c300e9c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.503140  175167 cache.go:107] acquiring lock: {Name:mkd75ce0d4fa468fb7a21624f0ca65e9f5547ae5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:42:05.503204  175167 start.go:369] acquired machines lock for "stopped-upgrade-181266" in 78.535µs
	I0108 20:42:05.503230  175167 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:42:05.503240  175167 fix.go:54] fixHost starting: m01
	I0108 20:42:05.503285  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0108 20:42:05.503303  175167 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 189.256µs
	I0108 20:42:05.503326  175167 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0108 20:42:05.503510  175167 cli_runner.go:164] Run: docker container inspect stopped-upgrade-181266 --format={{.State.Status}}
	I0108 20:42:05.523873  175167 fix.go:102] recreateIfNeeded on stopped-upgrade-181266: state=Stopped err=<nil>
	W0108 20:42:05.523932  175167 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:42:05.527046  175167 out.go:177] * Restarting existing docker container for "stopped-upgrade-181266" ...
	I0108 20:42:05.528712  175167 cli_runner.go:164] Run: docker start stopped-upgrade-181266
	I0108 20:42:05.778111  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0108 20:42:05.778147  175167 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 275.041199ms
	I0108 20:42:05.778166  175167 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0108 20:42:05.889427  175167 cli_runner.go:164] Run: docker container inspect stopped-upgrade-181266 --format={{.State.Status}}
	I0108 20:42:05.914772  175167 kic.go:430] container "stopped-upgrade-181266" state is running.
	I0108 20:42:05.924205  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0108 20:42:05.924235  175167 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 421.25876ms
	I0108 20:42:05.924301  175167 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0108 20:42:05.949108  175167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-181266
	I0108 20:42:05.973069  175167 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/stopped-upgrade-181266/config.json ...
	I0108 20:42:05.973368  175167 machine.go:88] provisioning docker machine ...
	I0108 20:42:05.973405  175167 ubuntu.go:169] provisioning hostname "stopped-upgrade-181266"
	I0108 20:42:05.973468  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:05.995473  175167 main.go:141] libmachine: Using SSH client type: native
	I0108 20:42:05.996451  175167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32939 <nil> <nil>}
	I0108 20:42:05.996478  175167 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-181266 && echo "stopped-upgrade-181266" | sudo tee /etc/hostname
	I0108 20:42:05.997490  175167 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43312->127.0.0.1:32939: read: connection reset by peer
	I0108 20:42:06.046862  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0108 20:42:06.046920  175167 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 544.09084ms
	I0108 20:42:06.046943  175167 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0108 20:42:06.568705  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0108 20:42:06.568730  175167 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.065779532s
	I0108 20:42:06.568740  175167 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0108 20:42:06.619010  175167 cache.go:115] /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 20:42:06.619048  175167 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.116179057s
	I0108 20:42:06.619069  175167 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 20:42:06.619083  175167 cache.go:87] Successfully saved all images to host disk.
	I0108 20:42:09.132364  175167 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-181266
	
	I0108 20:42:09.132468  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:09.161050  175167 main.go:141] libmachine: Using SSH client type: native
	I0108 20:42:09.161626  175167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32939 <nil> <nil>}
	I0108 20:42:09.161654  175167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-181266' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-181266/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-181266' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:42:09.280217  175167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:42:09.280256  175167 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17907-11003/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-11003/.minikube}
	I0108 20:42:09.280303  175167 ubuntu.go:177] setting up certificates
	I0108 20:42:09.280322  175167 provision.go:83] configureAuth start
	I0108 20:42:09.280408  175167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-181266
	I0108 20:42:09.317098  175167 provision.go:138] copyHostCerts
	I0108 20:42:09.317208  175167 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem, removing ...
	I0108 20:42:09.317227  175167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem
	I0108 20:42:09.317351  175167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/ca.pem (1078 bytes)
	I0108 20:42:09.317498  175167 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem, removing ...
	I0108 20:42:09.317510  175167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem
	I0108 20:42:09.317552  175167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/cert.pem (1123 bytes)
	I0108 20:42:09.317626  175167 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem, removing ...
	I0108 20:42:09.317632  175167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem
	I0108 20:42:09.317661  175167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-11003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-11003/.minikube/key.pem (1679 bytes)
	I0108 20:42:09.317788  175167 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-181266 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-181266]
	I0108 20:42:09.502109  175167 provision.go:172] copyRemoteCerts
	I0108 20:42:09.502236  175167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:42:09.502317  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:09.521633  175167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/stopped-upgrade-181266/id_rsa Username:docker}
	I0108 20:42:09.611015  175167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 20:42:09.630689  175167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 20:42:09.650866  175167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:42:09.669524  175167 provision.go:86] duration metric: configureAuth took 389.184096ms
	I0108 20:42:09.669553  175167 ubuntu.go:193] setting minikube options for container-runtime
	I0108 20:42:09.669764  175167 config.go:182] Loaded profile config "stopped-upgrade-181266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 20:42:09.669881  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:09.687107  175167 main.go:141] libmachine: Using SSH client type: native
	I0108 20:42:09.687694  175167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 127.0.0.1 32939 <nil> <nil>}
	I0108 20:42:09.687741  175167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:42:10.334915  175167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:42:10.334941  175167 machine.go:91] provisioned docker machine in 4.361553673s
	I0108 20:42:10.334962  175167 start.go:300] post-start starting for "stopped-upgrade-181266" (driver="docker")
	I0108 20:42:10.334975  175167 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:42:10.335037  175167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:42:10.335086  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:10.354107  175167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/stopped-upgrade-181266/id_rsa Username:docker}
	I0108 20:42:10.440702  175167 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:42:10.444687  175167 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 20:42:10.444724  175167 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 20:42:10.444735  175167 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 20:42:10.444745  175167 info.go:137] Remote host: Ubuntu 19.10
	I0108 20:42:10.444763  175167 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/addons for local assets ...
	I0108 20:42:10.444852  175167 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-11003/.minikube/files for local assets ...
	I0108 20:42:10.444961  175167 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem -> 177612.pem in /etc/ssl/certs
	I0108 20:42:10.445098  175167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:42:10.453679  175167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/ssl/certs/177612.pem --> /etc/ssl/certs/177612.pem (1708 bytes)
	I0108 20:42:10.473918  175167 start.go:303] post-start completed in 138.937845ms
	I0108 20:42:10.474014  175167 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:42:10.474069  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:10.496113  175167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/stopped-upgrade-181266/id_rsa Username:docker}
	I0108 20:42:10.577137  175167 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 20:42:10.581255  175167 fix.go:56] fixHost completed within 5.078011787s
	I0108 20:42:10.581281  175167 start.go:83] releasing machines lock for "stopped-upgrade-181266", held for 5.07805904s
	I0108 20:42:10.581355  175167 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-181266
	I0108 20:42:10.601646  175167 ssh_runner.go:195] Run: cat /version.json
	I0108 20:42:10.601699  175167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:42:10.601719  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:10.601806  175167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-181266
	I0108 20:42:10.625414  175167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/stopped-upgrade-181266/id_rsa Username:docker}
	I0108 20:42:10.625409  175167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32939 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/stopped-upgrade-181266/id_rsa Username:docker}
	W0108 20:42:10.747684  175167 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 20:42:10.747781  175167 ssh_runner.go:195] Run: systemctl --version
	I0108 20:42:10.753125  175167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:42:10.808960  175167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:42:10.814005  175167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:42:10.831444  175167 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 20:42:10.831531  175167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:42:10.858960  175167 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:42:10.858997  175167 start.go:475] detecting cgroup driver to use...
	I0108 20:42:10.859037  175167 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 20:42:10.859083  175167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:42:10.883771  175167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:42:10.895856  175167 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:42:10.895932  175167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:42:10.907956  175167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:42:10.917887  175167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 20:42:10.927964  175167 docker.go:227] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 20:42:10.928055  175167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:42:10.997770  175167 docker.go:233] disabling docker service ...
	I0108 20:42:10.997849  175167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:42:11.012905  175167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:42:11.030897  175167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:42:11.103907  175167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:42:11.178085  175167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:42:11.189322  175167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:42:11.205340  175167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:42:11.205436  175167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:42:11.217821  175167 out.go:177] 
	W0108 20:42:11.219745  175167 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 20:42:11.219784  175167 out.go:239] * 
	W0108 20:42:11.220778  175167 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 20:42:11.222089  175167 out.go:177] 
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-181266 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (97.15s)
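The stderr above pins down the failure: after restarting the container provisioned by the old v1.9.0 binary, the new binary tries to rewrite pause_image via sed in /etc/crio/crio.conf.d/02-crio.conf, but that drop-in does not exist on the old base image, so sed exits with status 2 and start aborts with RUNTIME_ENABLE (exit status 90). A defensive variant would create the drop-in before editing it. The Go sketch below only illustrates that idea locally — the paths and pause image are taken from the log, the helper itself is hypothetical, and the real provisioning step runs over SSH inside the container:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
)

// setPauseImage rewrites (or creates) the cri-o drop-in so pause_image points
// at the given image, instead of letting sed fail when the file is absent.
func setPauseImage(dropIn, image string) error {
	line := fmt.Sprintf("pause_image = %q\n", image)
	data, err := os.ReadFile(dropIn)
	if os.IsNotExist(err) {
		// Older base images (such as the v1.9.0-era kicbase here) ship no
		// 02-crio.conf; create it with a [crio.image] section.
		if err := os.MkdirAll(filepath.Dir(dropIn), 0o755); err != nil {
			return err
		}
		return os.WriteFile(dropIn, []byte("[crio.image]\n"+line), 0o644)
	}
	if err != nil {
		return err
	}
	// Same substitution the log's sed command attempts, applied in-process.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line[:len(line)-1]))
	} else {
		data = append(data, []byte(line)...)
	}
	return os.WriteFile(dropIn, data, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}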

Test pass (284/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.69
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 7.27
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 12.59
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.25
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
25 TestDownloadOnlyKic 1.38
26 TestBinaryMirror 0.82
27 TestOffline 57.57
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
32 TestAddons/Setup 146.33
34 TestAddons/parallel/Registry 13.65
36 TestAddons/parallel/InspektorGadget 10.74
37 TestAddons/parallel/MetricsServer 5.73
38 TestAddons/parallel/HelmTiller 12.4
40 TestAddons/parallel/CSI 78.85
41 TestAddons/parallel/Headlamp 13.29
42 TestAddons/parallel/CloudSpanner 6.53
43 TestAddons/parallel/LocalPath 54.19
44 TestAddons/parallel/NvidiaDevicePlugin 5.99
45 TestAddons/parallel/Yakd 5.01
48 TestAddons/serial/GCPAuth/Namespaces 0.13
49 TestAddons/StoppedEnableDisable 12.36
50 TestCertOptions 31.53
51 TestCertExpiration 228.39
53 TestForceSystemdFlag 29.39
54 TestForceSystemdEnv 32.59
56 TestKVMDriverInstallOrUpdate 3.43
60 TestErrorSpam/setup 25.85
61 TestErrorSpam/start 0.73
62 TestErrorSpam/status 1.02
63 TestErrorSpam/pause 1.7
64 TestErrorSpam/unpause 1.7
65 TestErrorSpam/stop 1.48
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 38.18
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 35.58
72 TestFunctional/serial/KubeContext 0.05
73 TestFunctional/serial/KubectlGetPods 0.08
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.14
77 TestFunctional/serial/CacheCmd/cache/add_local 1.27
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
79 TestFunctional/serial/CacheCmd/cache/list 0.08
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
82 TestFunctional/serial/CacheCmd/cache/delete 0.14
83 TestFunctional/serial/MinikubeKubectlCmd 0.14
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
85 TestFunctional/serial/ExtraConfig 31.66
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.57
88 TestFunctional/serial/LogsFileCmd 1.58
89 TestFunctional/serial/InvalidService 3.84
91 TestFunctional/parallel/ConfigCmd 0.6
92 TestFunctional/parallel/DashboardCmd 9.87
93 TestFunctional/parallel/DryRun 0.53
94 TestFunctional/parallel/InternationalLanguage 0.2
95 TestFunctional/parallel/StatusCmd 1.19
99 TestFunctional/parallel/ServiceCmdConnect 11.59
100 TestFunctional/parallel/AddonsCmd 0.17
101 TestFunctional/parallel/PersistentVolumeClaim 34.83
103 TestFunctional/parallel/SSHCmd 0.75
104 TestFunctional/parallel/CpCmd 1.99
105 TestFunctional/parallel/MySQL 22.61
106 TestFunctional/parallel/FileSync 0.32
107 TestFunctional/parallel/CertSync 2.13
111 TestFunctional/parallel/NodeLabels 0.11
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
115 TestFunctional/parallel/License 0.2
116 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.24
122 TestFunctional/parallel/ServiceCmd/List 0.56
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
125 TestFunctional/parallel/ServiceCmd/Format 0.38
126 TestFunctional/parallel/ServiceCmd/URL 0.4
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
133 TestFunctional/parallel/Version/short 0.09
134 TestFunctional/parallel/Version/components 1.37
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
139 TestFunctional/parallel/ImageCommands/ImageBuild 4.33
140 TestFunctional/parallel/ImageCommands/Setup 1.03
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.16
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
146 TestFunctional/parallel/MountCmd/any-port 17.31
147 TestFunctional/parallel/ProfileCmd/profile_list 0.56
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 7.58
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.3
151 TestFunctional/parallel/MountCmd/specific-port 2.03
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.33
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.96
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.23
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
157 TestFunctional/delete_addon-resizer_images 0.09
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 77.81
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.96
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
170 TestJSONOutput/start/Command 40.9
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.72
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.67
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.86
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.27
195 TestKicCustomNetwork/create_custom_network 32.57
196 TestKicCustomNetwork/use_default_bridge_network 28.33
197 TestKicExistingNetwork 29.65
198 TestKicCustomSubnet 29.48
199 TestKicStaticIP 29.42
200 TestMainNoArgs 0.07
201 TestMinikubeProfile 54.48
204 TestMountStart/serial/StartWithMountFirst 5.84
205 TestMountStart/serial/VerifyMountFirst 0.29
206 TestMountStart/serial/StartWithMountSecond 8.53
207 TestMountStart/serial/VerifyMountSecond 0.28
208 TestMountStart/serial/DeleteFirst 1.71
209 TestMountStart/serial/VerifyMountPostDelete 0.28
210 TestMountStart/serial/Stop 1.25
211 TestMountStart/serial/RestartStopped 7.35
212 TestMountStart/serial/VerifyMountPostStop 0.29
215 TestMultiNode/serial/FreshStart2Nodes 135.59
216 TestMultiNode/serial/DeployApp2Nodes 3.97
218 TestMultiNode/serial/AddNode 21.26
219 TestMultiNode/serial/MultiNodeLabels 0.07
220 TestMultiNode/serial/ProfileList 0.31
221 TestMultiNode/serial/CopyFile 10.29
222 TestMultiNode/serial/StopNode 2.27
223 TestMultiNode/serial/StartAfterStop 11.14
224 TestMultiNode/serial/RestartKeepsNodes 119.97
225 TestMultiNode/serial/DeleteNode 4.88
226 TestMultiNode/serial/StopMultiNode 24.07
227 TestMultiNode/serial/RestartMultiNode 77.97
228 TestMultiNode/serial/ValidateNameConflict 28.65
233 TestPreload 141.21
235 TestScheduledStopUnix 103.1
238 TestInsufficientStorage 13.77
241 TestKubernetesUpgrade 360.17
242 TestMissingContainerUpgrade 178.04
250 TestNetworkPlugins/group/false 9.1
254 TestStoppedBinaryUpgrade/Setup 0.43
256 TestStoppedBinaryUpgrade/MinikubeLogs 0.63
265 TestPause/serial/Start 44.85
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
268 TestNoKubernetes/serial/StartWithK8s 27.43
269 TestNoKubernetes/serial/StartWithStopK8s 8.57
270 TestPause/serial/SecondStartNoReconfiguration 30.39
271 TestNoKubernetes/serial/Start 5.25
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
273 TestNoKubernetes/serial/ProfileList 2.63
274 TestNoKubernetes/serial/Stop 1.3
275 TestNoKubernetes/serial/StartNoArgs 9.09
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
277 TestPause/serial/Pause 0.98
278 TestPause/serial/VerifyStatus 0.46
279 TestPause/serial/Unpause 0.91
280 TestPause/serial/PauseAgain 0.94
281 TestPause/serial/DeletePaused 3.01
282 TestPause/serial/VerifyDeletedResources 0.77
283 TestNetworkPlugins/group/auto/Start 72.12
284 TestNetworkPlugins/group/kindnet/Start 70.9
285 TestNetworkPlugins/group/auto/KubeletFlags 0.32
286 TestNetworkPlugins/group/auto/NetCatPod 9.24
287 TestNetworkPlugins/group/auto/DNS 0.18
288 TestNetworkPlugins/group/auto/Localhost 0.16
289 TestNetworkPlugins/group/auto/HairPin 0.16
290 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
291 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
292 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
293 TestNetworkPlugins/group/kindnet/DNS 0.22
294 TestNetworkPlugins/group/kindnet/Localhost 0.16
295 TestNetworkPlugins/group/kindnet/HairPin 0.17
296 TestNetworkPlugins/group/calico/Start 67.58
297 TestNetworkPlugins/group/custom-flannel/Start 66.12
298 TestNetworkPlugins/group/enable-default-cni/Start 80.19
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/calico/KubeletFlags 0.33
301 TestNetworkPlugins/group/calico/NetCatPod 10.2
302 TestNetworkPlugins/group/calico/DNS 0.18
303 TestNetworkPlugins/group/calico/Localhost 0.14
304 TestNetworkPlugins/group/calico/HairPin 0.16
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.39
307 TestNetworkPlugins/group/custom-flannel/DNS 0.24
308 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
309 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
310 TestNetworkPlugins/group/flannel/Start 66.79
311 TestNetworkPlugins/group/bridge/Start 84.56
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.74
315 TestStartStop/group/old-k8s-version/serial/FirstStart 118.18
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
320 TestStartStop/group/no-preload/serial/FirstStart 72.91
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
323 TestNetworkPlugins/group/flannel/NetCatPod 9.22
324 TestNetworkPlugins/group/flannel/DNS 0.17
325 TestNetworkPlugins/group/flannel/Localhost 0.15
326 TestNetworkPlugins/group/flannel/HairPin 0.13
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
328 TestNetworkPlugins/group/bridge/NetCatPod 11.22
329 TestNetworkPlugins/group/bridge/DNS 0.18
330 TestNetworkPlugins/group/bridge/Localhost 0.19
331 TestNetworkPlugins/group/bridge/HairPin 0.15
333 TestStartStop/group/embed-certs/serial/FirstStart 47.23
334 TestStartStop/group/no-preload/serial/DeployApp 8.32
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.93
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.97
338 TestStartStop/group/no-preload/serial/Stop 12.32
339 TestStartStop/group/old-k8s-version/serial/DeployApp 7.44
340 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
341 TestStartStop/group/no-preload/serial/SecondStart 341.24
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
343 TestStartStop/group/old-k8s-version/serial/Stop 12.37
344 TestStartStop/group/embed-certs/serial/DeployApp 8.32
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
346 TestStartStop/group/old-k8s-version/serial/SecondStart 428.3
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
349 TestStartStop/group/embed-certs/serial/Stop 12.29
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.26
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
353 TestStartStop/group/embed-certs/serial/SecondStart 613.58
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 348.57
356 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
357 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
358 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
359 TestStartStop/group/no-preload/serial/Pause 3.42
361 TestStartStop/group/newest-cni/serial/FirstStart 41.41
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.01
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.34
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
368 TestStartStop/group/newest-cni/serial/Stop 3.72
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
370 TestStartStop/group/newest-cni/serial/SecondStart 26.13
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/newest-cni/serial/Pause 2.7
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/old-k8s-version/serial/Pause 2.67
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
382 TestStartStop/group/embed-certs/serial/Pause 2.62
TestDownloadOnly/v1.16.0/json-events (7.69s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-529405 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-529405 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.693992068s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.69s)
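
For reference, the step above is an ordinary minikube invocation in download-only mode, so it can be reproduced outside the CI harness. A minimal sketch, assuming a checkout with the binary built at out/minikube-linux-amd64; the profile name and flags are copied from the log above:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-529405 \
	  --force --alsologtostderr --kubernetes-version=v1.16.0 \
	  --container-runtime=crio --driver=docker

The command exits after caching the kicbase image and the Kubernetes preload; no container or cluster is created.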

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
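
The preload-exists check passes instantly because, as far as the harness output shows, it only verifies that the tarball fetched in the previous step is present on disk. A quick manual equivalent, using the cache path from this run's logs (adjust MINIKUBE_HOME for your environment):

	ls /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/
	# expected to list: preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4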

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-529405
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-529405: exit status 85 (84.184071ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-529405 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-529405        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:20.060860   17773 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:20.061117   17773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:20.061126   17773 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:20.061131   17773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:20.061319   17773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	W0108 20:09:20.061459   17773 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-11003/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-11003/.minikube/config/config.json: no such file or directory
	I0108 20:09:20.062168   17773 out.go:303] Setting JSON to true
	I0108 20:09:20.063254   17773 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3086,"bootTime":1704741474,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:09:20.063335   17773 start.go:138] virtualization: kvm guest
	I0108 20:09:20.066174   17773 out.go:97] [download-only-529405] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:09:20.068127   17773 out.go:169] MINIKUBE_LOCATION=17907
	W0108 20:09:20.066320   17773 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 20:09:20.066403   17773 notify.go:220] Checking for updates...
	I0108 20:09:20.071623   17773 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:20.073286   17773 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:09:20.075228   17773 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:09:20.077255   17773 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:09:20.080457   17773 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:20.080734   17773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:09:20.105464   17773 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:20.105607   17773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:20.503182   17773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:09:20.492977525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:20.503330   17773 docker.go:295] overlay module found
	I0108 20:09:20.505838   17773 out.go:97] Using the docker driver based on user configuration
	I0108 20:09:20.505879   17773 start.go:298] selected driver: docker
	I0108 20:09:20.505888   17773 start.go:902] validating driver "docker" against <nil>
	I0108 20:09:20.506015   17773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:20.569555   17773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 20:09:20.559977297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:20.569710   17773 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:09:20.570196   17773 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0108 20:09:20.570385   17773 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 20:09:20.572837   17773 out.go:169] Using Docker driver with root privileges
	I0108 20:09:20.575056   17773 cni.go:84] Creating CNI manager for ""
	I0108 20:09:20.575100   17773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:09:20.575115   17773 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:09:20.575130   17773 start_flags.go:323] config:
	{Name:download-only-529405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-529405 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:20.577102   17773 out.go:97] Starting control plane node download-only-529405 in cluster download-only-529405
	I0108 20:09:20.577127   17773 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:09:20.578651   17773 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:20.578679   17773 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 20:09:20.578743   17773 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:20.597734   17773 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:20.597893   17773 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:20.597974   17773 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:20.611923   17773 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 20:09:20.611953   17773 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:20.612081   17773 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 20:09:20.614562   17773 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 20:09:20.614587   17773 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:09:20.651869   17773 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 20:09:24.549836   17773 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:09:24.616469   17773 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:09:24.616592   17773 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-529405"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
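
The exit status 85 recorded above is the expected outcome rather than a defect: in download-only mode no control-plane node is ever created, so "minikube logs" has nothing to collect (the captured output even says: The control plane node "" does not exist), and the test notes the failure without treating it as fatal. Reproduction sketch, with the profile name from this run:

	out/minikube-linux-amd64 logs -p download-only-529405
	echo $?   # 85 on this run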

TestDownloadOnly/v1.28.4/json-events (7.27s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-529405 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-529405 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.270451252s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (7.27s)
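
This second run reuses the download-only-529405 profile, so only the v1.28.4-specific artifacts are fetched; the replayed start log further down shows the kicbase image being found in the local cache and the pull being skipped. Since the preload URL carries an md5 checksum parameter, the cached file can be cross-checked by hand (path and hash taken from the log below; environment-specific):

	cd /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball
	md5sum preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	# expected: b0bd7b3b222c094c365d9c9e10e48fc7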

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-529405
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-529405: exit status 85 (91.118801ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-529405 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-529405        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-529405 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-529405        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:27.846201   17930 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:27.846371   17930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:27.846380   17930 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:27.846384   17930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:27.846579   17930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	W0108 20:09:27.846695   17930 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-11003/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-11003/.minikube/config/config.json: no such file or directory
	I0108 20:09:27.847185   17930 out.go:303] Setting JSON to true
	I0108 20:09:27.848226   17930 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3094,"bootTime":1704741474,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:09:27.848304   17930 start.go:138] virtualization: kvm guest
	I0108 20:09:27.851114   17930 out.go:97] [download-only-529405] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:09:27.853416   17930 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:09:27.851348   17930 notify.go:220] Checking for updates...
	I0108 20:09:27.855770   17930 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:27.857883   17930 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:09:27.859873   17930 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:09:27.861644   17930 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:09:27.865169   17930 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:27.865717   17930 config.go:182] Loaded profile config "download-only-529405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0108 20:09:27.865789   17930 start.go:810] api.Load failed for download-only-529405: filestore "download-only-529405": Docker machine "download-only-529405" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:27.865913   17930 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:09:27.865988   17930 start.go:810] api.Load failed for download-only-529405: filestore "download-only-529405": Docker machine "download-only-529405" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:27.892934   17930 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:27.893085   17930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:27.957041   17930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 20:09:27.946678385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:27.957163   17930 docker.go:295] overlay module found
	I0108 20:09:27.959338   17930 out.go:97] Using the docker driver based on existing profile
	I0108 20:09:27.959387   17930 start.go:298] selected driver: docker
	I0108 20:09:27.959396   17930 start.go:902] validating driver "docker" against &{Name:download-only-529405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-529405 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:27.959609   17930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:28.019034   17930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 20:09:28.009045165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:28.019855   17930 cni.go:84] Creating CNI manager for ""
	I0108 20:09:28.019876   17930 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:09:28.019890   17930 start_flags.go:323] config:
	{Name:download-only-529405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-529405 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:28.022515   17930 out.go:97] Starting control plane node download-only-529405 in cluster download-only-529405
	I0108 20:09:28.022543   17930 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:09:28.024287   17930 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:28.024337   17930 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:09:28.024438   17930 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:28.042168   17930 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:28.042318   17930 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:28.042337   17930 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:09:28.042351   17930 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:09:28.042367   17930 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:09:28.054329   17930 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:09:28.054361   17930 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:28.054552   17930 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:09:28.057129   17930 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 20:09:28.057166   17930 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:09:28.106071   17930 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-529405"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

TestDownloadOnly/v1.29.0-rc.2/json-events (12.59s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-529405 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-529405 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.589084894s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (12.59s)
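
Release candidates take the same download path as stable versions; only the version string changes. The rc preload URL itself is not shown in the captured logs, but assuming the naming scheme of the two earlier runs also holds here, its availability could be probed with:

	# hypothetical URL, extrapolated from the v1.16.0/v1.28.4 download lines above
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 | head -n 1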

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-529405
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-529405: exit status 85 (85.140676ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-529405 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-529405           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-529405 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-529405           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-529405 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-529405           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:35.209195   18074 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:35.209319   18074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:35.209327   18074 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:35.209332   18074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:35.209571   18074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	W0108 20:09:35.209679   18074 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-11003/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-11003/.minikube/config/config.json: no such file or directory
	I0108 20:09:35.210088   18074 out.go:303] Setting JSON to true
	I0108 20:09:35.210926   18074 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3101,"bootTime":1704741474,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:09:35.210989   18074 start.go:138] virtualization: kvm guest
	I0108 20:09:35.213696   18074 out.go:97] [download-only-529405] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:09:35.215705   18074 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:09:35.213909   18074 notify.go:220] Checking for updates...
	I0108 20:09:35.219767   18074 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:35.221737   18074 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:09:35.223923   18074 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:09:35.225880   18074 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:09:35.229375   18074 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:35.229879   18074 config.go:182] Loaded profile config "download-only-529405": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 20:09:35.229938   18074 start.go:810] api.Load failed for download-only-529405: filestore "download-only-529405": Docker machine "download-only-529405" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:35.230048   18074 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:09:35.230091   18074 start.go:810] api.Load failed for download-only-529405: filestore "download-only-529405": Docker machine "download-only-529405" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:09:35.252668   18074 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:09:35.252782   18074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:35.317641   18074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2024-01-08 20:09:35.307469246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:35.317777   18074 docker.go:295] overlay module found
	I0108 20:09:35.320522   18074 out.go:97] Using the docker driver based on existing profile
	I0108 20:09:35.320567   18074 start.go:298] selected driver: docker
	I0108 20:09:35.320574   18074 start.go:902] validating driver "docker" against &{Name:download-only-529405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-529405 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:35.320777   18074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:09:35.382939   18074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2024-01-08 20:09:35.372698329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:09:35.383785   18074 cni.go:84] Creating CNI manager for ""
	I0108 20:09:35.383811   18074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 20:09:35.383829   18074 start_flags.go:323] config:
	{Name:download-only-529405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-529405 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:35.386772   18074 out.go:97] Starting control plane node download-only-529405 in cluster download-only-529405
	I0108 20:09:35.386807   18074 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 20:09:35.388728   18074 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0108 20:09:35.388769   18074 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:09:35.388884   18074 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0108 20:09:35.405634   18074 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0108 20:09:35.405802   18074 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0108 20:09:35.405827   18074 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0108 20:09:35.405846   18074 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0108 20:09:35.405864   18074 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0108 20:09:35.418284   18074 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 20:09:35.418313   18074 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:35.418461   18074 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:09:35.421129   18074 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 20:09:35.421166   18074 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:09:35.460028   18074 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 20:09:42.128444   18074 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:09:42.128602   18074 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-11003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:09:42.988560   18074 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 20:09:42.988777   18074 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/download-only-529405/config.json ...
	I0108 20:09:42.989103   18074 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:09:42.989348   18074 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17907-11003/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-529405"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
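
The download.go lines above carry the expected digest in the URL itself (?checksum=md5:... for the preload tarball, ?checksum=file:<sha256 url> for kubectl), and the downloader verifies the file against that digest after the transfer. A minimal sketch of the same pattern, assuming the hashicorp/go-getter library (which parses exactly this checksum query parameter); the destination path is hypothetical:

    package main

    import (
        "log"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        // go-getter strips the checksum parameter from the request; once the
        // download completes it hashes the file and fails unless the MD5
        // matches -- the same shape as the download.go lines in the log above.
        src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b"
        dst := "/tmp/preload.tar.lz4" // hypothetical local path

        if err := getter.GetFile(dst, src); err != nil {
            log.Fatalf("download/verify failed: %v", err)
        }
        log.Printf("downloaded and checksum-verified: %s", dst)
    }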

TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-529405
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (1.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-615567 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-615567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-615567
--- PASS: TestDownloadOnlyKic (1.38s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-930872 --alsologtostderr --binary-mirror http://127.0.0.1:36653 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-930872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-930872
--- PASS: TestBinaryMirror (0.82s)

TestOffline (57.57s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-940411 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-940411 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (51.56498478s)
helpers_test.go:175: Cleaning up "offline-crio-940411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-940411
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-940411: (6.004582528s)
--- PASS: TestOffline (57.57s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-793365
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-793365: exit status 85 (83.340091ms)
-- stdout --
	* Profile "addons-793365" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-793365"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-793365
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-793365: exit status 85 (81.995812ms)
-- stdout --
	* Profile "addons-793365" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-793365"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (146.33s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-793365 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-793365 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.33366436s)
--- PASS: TestAddons/Setup (146.33s)

TestAddons/parallel/Registry (13.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 14.448442ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xpwjx" [44a1b58b-e22d-43ec-aac2-050524a7e3e5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005931211s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pr7pt" [0eff1f23-2361-4088-8ccf-5c2e1e86e104] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005839257s
addons_test.go:340: (dbg) Run:  kubectl --context addons-793365 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-793365 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-793365 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.742846117s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.65s)
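
The decisive step in this test is the probe at addons_test.go:345: a throwaway busybox pod resolves the registry Service by its cluster-local DNS name, so the check only passes if both in-cluster DNS and the Service are working. A standalone sketch of that probe, shelling out to kubectl the way the harness does (the command is copied from the log; the helper name probeRegistry is our own):

    package main

    import (
        "log"
        "os/exec"
    )

    // probeRegistry runs a one-shot busybox pod that exits non-zero unless
    // the registry Service answers on its cluster-local DNS name.
    func probeRegistry(kubeContext string) error {
        cmd := exec.Command("kubectl", "--context", kubeContext,
            "run", "--rm", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "-it", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
        out, err := cmd.CombinedOutput()
        log.Printf("%s", out)
        return err
    }

    func main() {
        if err := probeRegistry("addons-793365"); err != nil {
            log.Fatalf("registry not reachable in-cluster: %v", err)
        }
    }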

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pl7lv" [71268fb0-cac8-470d-9805-35599051ae4e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004914501s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-793365
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-793365: (5.729286697s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (5.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.219265ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7n5fz" [b8ce805d-d383-4dbe-a9c3-6e15a815be3d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005633024s
addons_test.go:415: (dbg) Run:  kubectl --context addons-793365 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

TestAddons/parallel/HelmTiller (12.4s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 14.095141ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-27plq" [6100538b-a718-4be8-9ca1-bb2d3bfa9ce5] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005024661s
addons_test.go:473: (dbg) Run:  kubectl --context addons-793365 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-793365 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.777261113s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.40s)

TestAddons/parallel/CSI (78.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 14.318962ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-793365 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-793365 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [53bd4211-82ab-455c-a405-14dd5172278a] Pending
helpers_test.go:344: "task-pv-pod" [53bd4211-82ab-455c-a405-14dd5172278a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [53bd4211-82ab-455c-a405-14dd5172278a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.0052926s
addons_test.go:584: (dbg) Run:  kubectl --context addons-793365 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-793365 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-793365 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-793365 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-793365 delete pod task-pv-pod: (1.560922724s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-793365 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-793365 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-793365 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bac1b30e-c705-4a28-a451-a8d74f0191cc] Pending
helpers_test.go:344: "task-pv-pod-restore" [bac1b30e-c705-4a28-a451-a8d74f0191cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bac1b30e-c705-4a28-a451-a8d74f0191cc] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005024139s
addons_test.go:626: (dbg) Run:  kubectl --context addons-793365 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-793365 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-793365 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-793365 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.682948894s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (78.85s)
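
The long run of helpers_test.go:394 lines above is a plain poll: the harness re-reads the claim's .status.phase until it leaves Pending (or the 6m0s budget runs out). A minimal sketch of that wait loop, shelling out to kubectl the same way (function name, poll interval, and error text are our own choices):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForPVCPhase polls the PVC's .status.phase until it reaches the
    // wanted phase (e.g. "Bound") or the timeout elapses.
    func waitForPVCPhase(kubeContext, name, namespace, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pvc", name, "-o", "jsonpath={.status.phase}",
                "-n", namespace).Output()
            if err == nil && string(out) == want {
                return nil
            }
            time.Sleep(2 * time.Second) // poll interval is our choice
        }
        return fmt.Errorf("pvc %s/%s did not reach %q within %v", namespace, name, want, timeout)
    }

    func main() {
        if err := waitForPVCPhase("addons-793365", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }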

TestAddons/parallel/Headlamp (13.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-793365 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-793365 --alsologtostderr -v=1: (1.284491801s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-s294f" [7e6ee93a-b32a-4071-974f-d8fb02749ec5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-s294f" [7e6ee93a-b32a-4071-974f-d8fb02749ec5] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-s294f" [7e6ee93a-b32a-4071-974f-d8fb02749ec5] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004882051s
--- PASS: TestAddons/parallel/Headlamp (13.29s)

TestAddons/parallel/CloudSpanner (6.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-grkk8" [e1de9c27-325a-415b-baaf-945362502440] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004370785s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-793365
--- PASS: TestAddons/parallel/CloudSpanner (6.53s)

TestAddons/parallel/LocalPath (54.19s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-793365 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-793365 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793365 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [42f4e5b1-5405-4d23-b049-11801a950f90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [42f4e5b1-5405-4d23-b049-11801a950f90] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [42f4e5b1-5405-4d23-b049-11801a950f90] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004111774s
addons_test.go:891: (dbg) Run:  kubectl --context addons-793365 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 ssh "cat /opt/local-path-provisioner/pvc-5e67be60-a644-4a8f-a6a7-599f5a45b0d3_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-793365 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-793365 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-793365 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-793365 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.168656359s)
--- PASS: TestAddons/parallel/LocalPath (54.19s)

TestAddons/parallel/NvidiaDevicePlugin (5.99s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gc69v" [abbcf562-fe67-4f2e-a81c-13b335b4c501] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005386425s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-793365
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.99s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-p59sb" [fd89127d-4a92-4848-8af4-3e629f394bb2] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004379598s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-793365 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-793365 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-793365
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-793365: (12.029658826s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-793365
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-793365
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-793365
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (31.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-625863 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-625863 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.694406338s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-625863 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-625863 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-625863 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-625863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-625863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-625863: (2.111792295s)
--- PASS: TestCertOptions (31.53s)

TestCertExpiration (228.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-378390 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-378390 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.424631482s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-378390 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-378390 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.196208121s)
helpers_test.go:175: Cleaning up "cert-expiration-378390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-378390
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-378390: (2.771343912s)
--- PASS: TestCertExpiration (228.39s)
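
The two start invocations differ only in --cert-expiration: 3m issues certificates that lapse while the test waits, and 8760h re-issues them with exactly one year of validity (8760 = 365 * 24). A quick, purely illustrative check of that arithmetic, using the values from the commands above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        short, _ := time.ParseDuration("3m")   // forces expiry during the test
        long, _ := time.ParseDuration("8760h") // the renewal value
        fmt.Println(short)             // 3m0s
        fmt.Println(long.Hours() / 24) // 365 days, i.e. one year
    }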

TestForceSystemdFlag (29.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-980544 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0108 20:43:13.023124   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-980544 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.760548861s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-980544 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-980544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-980544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-980544: (3.295614309s)
--- PASS: TestForceSystemdFlag (29.39s)

TestForceSystemdEnv (32.59s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-007731 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-007731 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.025687977s)
helpers_test.go:175: Cleaning up "force-systemd-env-007731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-007731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-007731: (2.563956429s)
--- PASS: TestForceSystemdEnv (32.59s)

TestKVMDriverInstallOrUpdate (3.43s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.43s)

TestErrorSpam/setup (25.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-303099 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-303099 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-303099 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-303099 --driver=docker  --container-runtime=crio: (25.852421826s)
--- PASS: TestErrorSpam/setup (25.85s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 stop: (1.240467104s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-303099 --log_dir /tmp/nospam-303099 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17907-11003/.minikube/files/etc/test/nested/copy/17761/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-563235 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-563235 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.178902513s)
--- PASS: TestFunctional/serial/StartWithProxy (38.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.58s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-563235 --alsologtostderr -v=8
E0108 20:17:17.009269   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:17.015172   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:17.025592   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:17.046025   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:17.086386   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:17.167256   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:17.327770   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:17.648483   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:18.288869   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:19.569176   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:22.130108   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-563235 --alsologtostderr -v=8: (35.581467429s)
functional_test.go:659: soft start took 35.58219779s for "functional-563235" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.58s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-563235 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cache add registry.k8s.io/pause:3.1
E0108 20:17:27.250989   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 cache add registry.k8s.io/pause:3.1: (1.05065942s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 cache add registry.k8s.io/pause:3.3: (1.086636561s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 cache add registry.k8s.io/pause:latest: (1.000818458s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-563235 /tmp/TestFunctionalserialCacheCmdcacheadd_local1704815568/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cache add minikube-local-cache-test:functional-563235
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cache delete minikube-local-cache-test:functional-563235
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-563235
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (314.406239ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 kubectl -- --context functional-563235 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-563235 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (31.66s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-563235 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 20:17:37.492016   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:17:57.972830   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-563235 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.654706996s)
functional_test.go:757: restart took 31.654867927s for "functional-563235" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.66s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-563235 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.57s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 logs: (1.566800235s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

TestFunctional/serial/LogsFileCmd (1.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 logs --file /tmp/TestFunctionalserialLogsFileCmd3355360307/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 logs --file /tmp/TestFunctionalserialLogsFileCmd3355360307/001/logs.txt: (1.579750574s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

TestFunctional/serial/InvalidService (3.84s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-563235 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-563235
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-563235: exit status 115 (389.662387ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31997 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-563235 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.84s)

TestFunctional/parallel/ConfigCmd (0.6s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 config get cpus: exit status 14 (149.841552ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 config get cpus: exit status 14 (91.200484ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.60s)

TestFunctional/parallel/DashboardCmd (9.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-563235 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-563235 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 56672: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.87s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-563235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-563235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (260.816521ms)

-- stdout --
	* [functional-563235] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0108 20:18:48.488051   55428 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:18:48.488196   55428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:18:48.488206   55428 out.go:309] Setting ErrFile to fd 2...
	I0108 20:18:48.488214   55428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:18:48.488555   55428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:18:48.489351   55428 out.go:303] Setting JSON to false
	I0108 20:18:48.496939   55428 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3655,"bootTime":1704741474,"procs":554,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:18:48.497095   55428 start.go:138] virtualization: kvm guest
	I0108 20:18:48.500176   55428 out.go:177] * [functional-563235] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:18:48.501949   55428 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:18:48.502254   55428 notify.go:220] Checking for updates...
	I0108 20:18:48.505384   55428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:18:48.507367   55428 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:18:48.508977   55428 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:18:48.510579   55428 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:18:48.512017   55428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:18:48.513922   55428 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:18:48.514553   55428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:18:48.549497   55428 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:18:48.549687   55428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:18:48.623778   55428 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2024-01-08 20:18:48.612359982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:18:48.623915   55428 docker.go:295] overlay module found
	I0108 20:18:48.626322   55428 out.go:177] * Using the docker driver based on existing profile
	I0108 20:18:48.628175   55428 start.go:298] selected driver: docker
	I0108 20:18:48.628206   55428 start.go:902] validating driver "docker" against &{Name:functional-563235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-563235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:18:48.628353   55428 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:18:48.631529   55428 out.go:177] 
	W0108 20:18:48.633338   55428 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 20:18:48.635069   55428 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-563235 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-563235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-563235 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.508799ms)

-- stdout --
	* [functional-563235] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0108 20:18:48.977735   55805 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:18:48.977885   55805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:18:48.977894   55805 out.go:309] Setting ErrFile to fd 2...
	I0108 20:18:48.977898   55805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:18:48.978252   55805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:18:48.978952   55805 out.go:303] Setting JSON to false
	I0108 20:18:48.980588   55805 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3655,"bootTime":1704741474,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:18:48.980676   55805 start.go:138] virtualization: kvm guest
	I0108 20:18:48.983580   55805 out.go:177] * [functional-563235] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 20:18:48.985689   55805 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:18:48.985690   55805 notify.go:220] Checking for updates...
	I0108 20:18:48.987378   55805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:18:48.989108   55805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:18:48.990907   55805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:18:48.992457   55805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:18:48.993998   55805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:18:48.996164   55805 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:18:48.996778   55805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:18:49.025899   55805 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:18:49.026032   55805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:18:49.097760   55805 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2024-01-08 20:18:49.083408349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:18:49.097890   55805 docker.go:295] overlay module found
	I0108 20:18:49.102153   55805 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 20:18:49.103708   55805 start.go:298] selected driver: docker
	I0108 20:18:49.103735   55805 start.go:902] validating driver "docker" against &{Name:functional-563235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-563235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:18:49.103903   55805 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:18:49.107389   55805 out.go:177] 
	W0108 20:18:49.109488   55805 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 20:18:49.111428   55805 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (11.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-563235 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-563235 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-fvmn6" [27556b6a-b019-4e83-ab86-e097d80200a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-fvmn6" [27556b6a-b019-4e83-ab86-e097d80200a5] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.006150763s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32302
functional_test.go:1674: http://192.168.49.2:32302: success! body:

Hostname: hello-node-connect-55497b8b78-fvmn6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32302
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.59s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (34.83s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e014b367-fcd5-4aea-b423-f845fee3457c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.037866272s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-563235 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-563235 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-563235 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-563235 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c1808a1b-999b-4dc8-bc8e-b08452d7a2dc] Pending
helpers_test.go:344: "sp-pod" [c1808a1b-999b-4dc8-bc8e-b08452d7a2dc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c1808a1b-999b-4dc8-bc8e-b08452d7a2dc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005304554s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-563235 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-563235 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-563235 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3abd5f45-84da-4163-932c-3dfca567fcae] Pending
helpers_test.go:344: "sp-pod" [3abd5f45-84da-4163-932c-3dfca567fcae] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3abd5f45-84da-4163-932c-3dfca567fcae] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.008531455s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-563235 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.83s)

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (1.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh -n functional-563235 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cp functional-563235:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3741730131/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh -n functional-563235 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh -n functional-563235 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

TestFunctional/parallel/MySQL (22.61s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-563235 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-cjjpw" [2d5145b7-9894-4a0e-b57e-2a90d538bf1b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-cjjpw" [2d5145b7-9894-4a0e-b57e-2a90d538bf1b] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.011192287s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-563235 exec mysql-859648c796-cjjpw -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-563235 exec mysql-859648c796-cjjpw -- mysql -ppassword -e "show databases;": exit status 1 (284.345897ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-563235 exec mysql-859648c796-cjjpw -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-563235 exec mysql-859648c796-cjjpw -- mysql -ppassword -e "show databases;": exit status 1 (227.817017ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-563235 exec mysql-859648c796-cjjpw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.61s)

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/17761/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo cat /etc/test/nested/copy/17761/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (2.13s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/17761.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo cat /etc/ssl/certs/17761.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/17761.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo cat /usr/share/ca-certificates/17761.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/177612.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo cat /etc/ssl/certs/177612.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/177612.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo cat /usr/share/ca-certificates/177612.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-563235 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
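The go-template above just walks the label map of the first node; an equivalent jsonpath one-liner, for anyone who finds templates unwieldy:

    kubectl --context functional-563235 get nodes -o jsonpath='{.items[0].metadata.labels}'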

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh "sudo systemctl is-active docker": exit status 1 (322.074061ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh "sudo systemctl is-active containerd": exit status 1 (327.163088ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
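The two non-zero exits are the expected outcome here: systemctl is-active exits 0 only when the unit is active, and status 3 means the unit is inactive, which minikube ssh propagates back to the caller. Since this job runs the crio runtime, the complementary check (an assumption, not part of the test) would be:

    minikube -p functional-563235 ssh "sudo systemctl is-active crio"   # expected: active, exit 0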

                                                
                                    
TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-563235 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-563235 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-b75qh" [61936be4-d7ba-41cc-86fb-52547635240e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-b75qh" [61936be4-d7ba-41cc-86fb-52547635240e] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004660219s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
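A hand-rolled equivalent of the pod-matching wait used above, expressed against the Deployment itself (a sketch; the test polls pods directly):

    kubectl --context functional-563235 wait --for=condition=available deployment/hello-node --timeout=600s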

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-563235 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-563235 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-563235 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 49887: os: process already finished
helpers_test.go:502: unable to terminate pid 49615: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-563235 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-563235 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-563235 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [64339c71-4119-4e8a-9cd9-e3fc190389f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [64339c71-4119-4e8a-9cd9-e3fc190389f2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.004559548s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 service list -o json
functional_test.go:1493: Took "570.36016ms" to run "out/minikube-linux-amd64 -p functional-563235 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31824
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31824
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
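The HTTPS, Format, and URL lookups above all resolve to the same NodePort (31824 in this run); it can also be read straight off the Service object:

    kubectl --context functional-563235 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'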

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-563235 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.88.52 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
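While the tunnel started in StartTunnel is running, LoadBalancer services receive an ingress IP that is routable from the host; on the docker driver this is the service's cluster IP (10.107.88.52 here). A sketch of the check AccessDirect performs:

    IP=$(kubectl --context functional-563235 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"   # reaches the nginx-svc pod only while the tunnel is up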

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-563235 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 version -o=json --components: (1.373656139s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-563235 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-563235
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-563235 image ls --format short --alsologtostderr:
I0108 20:18:53.552573   57156 out.go:296] Setting OutFile to fd 1 ...
I0108 20:18:53.552818   57156 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.552832   57156 out.go:309] Setting ErrFile to fd 2...
I0108 20:18:53.552840   57156 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.553248   57156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
I0108 20:18:53.554094   57156 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.554298   57156 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.555296   57156 cli_runner.go:164] Run: docker container inspect functional-563235 --format={{.State.Status}}
I0108 20:18:53.584490   57156 ssh_runner.go:195] Run: systemctl --version
I0108 20:18:53.584536   57156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563235
I0108 20:18:53.610271   57156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/functional-563235/id_rsa Username:docker}
I0108 20:18:53.709441   57156 ssh_runner.go:195] Run: sudo crictl images --output json
W0108 20:18:53.807423   57156 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 2e100127-65cc-42c1-ad29-361914087d8a
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-563235 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-563235  | ffd4cfbbe753e | 34.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-563235 image ls --format table --alsologtostderr:
I0108 20:18:53.909235   57332 out.go:296] Setting OutFile to fd 1 ...
I0108 20:18:53.909394   57332 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.909402   57332 out.go:309] Setting ErrFile to fd 2...
I0108 20:18:53.909408   57332 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.909788   57332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
I0108 20:18:53.910960   57332 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.911139   57332 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.911909   57332 cli_runner.go:164] Run: docker container inspect functional-563235 --format={{.State.Status}}
I0108 20:18:53.938345   57332 ssh_runner.go:195] Run: systemctl --version
I0108 20:18:53.938426   57332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563235
I0108 20:18:53.964090   57332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/functional-563235/id_rsa Username:docker}
I0108 20:18:54.088771   57332 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-563235 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTa
gs":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4
a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-563235"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-
provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"83f6cc407eed88d21
4aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a87894
9031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-563235 image ls --format json --alsologtostderr:
I0108 20:18:53.897250   57326 out.go:296] Setting OutFile to fd 1 ...
I0108 20:18:53.897468   57326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.897482   57326 out.go:309] Setting ErrFile to fd 2...
I0108 20:18:53.897491   57326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.897824   57326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
I0108 20:18:53.898776   57326 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.898917   57326 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.899547   57326 cli_runner.go:164] Run: docker container inspect functional-563235 --format={{.State.Status}}
I0108 20:18:53.923617   57326 ssh_runner.go:195] Run: systemctl --version
I0108 20:18:53.923679   57326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563235
I0108 20:18:53.948382   57326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/functional-563235/id_rsa Username:docker}
I0108 20:18:54.040925   57326 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
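The JSON form is the easiest to post-process; for example, to list just the tags (assuming jq is available on the host):

    minikube -p functional-563235 image ls --format json | jq -r '.[].repoTags[]'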

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-563235 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-563235
size: "34114467"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-563235 image ls --format yaml --alsologtostderr:
I0108 20:18:53.562294   57157 out.go:296] Setting OutFile to fd 1 ...
I0108 20:18:53.562495   57157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.562528   57157 out.go:309] Setting ErrFile to fd 2...
I0108 20:18:53.562541   57157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.562798   57157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
I0108 20:18:53.563560   57157 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.563713   57157 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.564195   57157 cli_runner.go:164] Run: docker container inspect functional-563235 --format={{.State.Status}}
I0108 20:18:53.596999   57157 ssh_runner.go:195] Run: systemctl --version
I0108 20:18:53.597087   57157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563235
I0108 20:18:53.620446   57157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/functional-563235/id_rsa Username:docker}
I0108 20:18:53.712625   57157 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh pgrep buildkitd: exit status 1 (339.421981ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image build -t localhost/my-image:functional-563235 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 image build -t localhost/my-image:functional-563235 testdata/build --alsologtostderr: (3.748018985s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-563235 image build -t localhost/my-image:functional-563235 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 37a322f96d6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-563235
--> 743c8d422f4
Successfully tagged localhost/my-image:functional-563235
743c8d422f480af35fb0e1fa54ccf99ad45be0add0a289d481ed19eb0e64baa8
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-563235 image build -t localhost/my-image:functional-563235 testdata/build --alsologtostderr:
I0108 20:18:53.893111   57318 out.go:296] Setting OutFile to fd 1 ...
I0108 20:18:53.893353   57318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.893367   57318 out.go:309] Setting ErrFile to fd 2...
I0108 20:18:53.893375   57318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:18:53.893733   57318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
I0108 20:18:53.894747   57318 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.895549   57318 config.go:182] Loaded profile config "functional-563235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:18:53.896181   57318 cli_runner.go:164] Run: docker container inspect functional-563235 --format={{.State.Status}}
I0108 20:18:53.925142   57318 ssh_runner.go:195] Run: systemctl --version
I0108 20:18:53.925208   57318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563235
I0108 20:18:53.948193   57318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/functional-563235/id_rsa Username:docker}
I0108 20:18:54.044455   57318 build_images.go:151] Building image from path: /tmp/build.3073425574.tar
I0108 20:18:54.044571   57318 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 20:18:54.054178   57318 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3073425574.tar
I0108 20:18:54.090769   57318 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3073425574.tar: stat -c "%s %y" /var/lib/minikube/build/build.3073425574.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3073425574.tar': No such file or directory
I0108 20:18:54.090813   57318 ssh_runner.go:362] scp /tmp/build.3073425574.tar --> /var/lib/minikube/build/build.3073425574.tar (3072 bytes)
I0108 20:18:54.119608   57318 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3073425574
I0108 20:18:54.201567   57318 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3073425574 -xf /var/lib/minikube/build/build.3073425574.tar
I0108 20:18:54.212781   57318 crio.go:297] Building image: /var/lib/minikube/build/build.3073425574
I0108 20:18:54.212854   57318 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-563235 /var/lib/minikube/build/build.3073425574 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0108 20:18:57.524259   57318 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-563235 /var/lib/minikube/build/build.3073425574 --cgroup-manager=cgroupfs: (3.311379515s)
I0108 20:18:57.524328   57318 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3073425574
I0108 20:18:57.533956   57318 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3073425574.tar
I0108 20:18:57.543928   57318 build_images.go:207] Built localhost/my-image:functional-563235 from /tmp/build.3073425574.tar
I0108 20:18:57.543984   57318 build_images.go:123] succeeded building to: functional-563235
I0108 20:18:57.543990   57318 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls
2024/01/08 20:18:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)
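The stderr above shows how image build works on a crio cluster: the build context is tarred, copied into the node, and built there with sudo podman build. The STEP lines imply testdata/build contains a content.txt plus a Dockerfile along these lines (reconstructed from the log, not quoted from the repo):

    # Dockerfile (reconstructed from STEP 1/3 .. 3/3):
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    minikube -p functional-563235 image build -t localhost/my-image:functional-563235 testdata/build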

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-563235
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.03s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
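update-context rewrites the API-server address for the profile in kubeconfig, which matters on the docker driver when the container's mapped port changes. The effect can be checked by hand (a sketch):

    minikube -p functional-563235 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-563235")].cluster.server}'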

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image load --daemon gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 image load --daemon gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr: (5.672727897s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.16s)
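image load --daemon takes the named image from the host's docker daemon and imports it into the cluster's container storage; a quick before/after check on the host (a sketch):

    docker image inspect gcr.io/google-containers/addon-resizer:functional-563235 --format '{{.Id}}'
    minikube -p functional-563235 image ls | grep addon-resizer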

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (17.31s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdany-port1904014736/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704745108685524059" to /tmp/TestFunctionalparallelMountCmdany-port1904014736/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704745108685524059" to /tmp/TestFunctionalparallelMountCmdany-port1904014736/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704745108685524059" to /tmp/TestFunctionalparallelMountCmdany-port1904014736/001/test-1704745108685524059
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.6409ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:18 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:18 test-1704745108685524059
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh cat /mount-9p/test-1704745108685524059
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-563235 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5171a330-9c62-42d4-b0ad-ad1f48c4d994] Pending
helpers_test.go:344: "busybox-mount" [5171a330-9c62-42d4-b0ad-ad1f48c4d994] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5171a330-9c62-42d4-b0ad-ad1f48c4d994] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5171a330-9c62-42d4-b0ad-ad1f48c4d994] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.006790609s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-563235 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdany-port1904014736/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.31s)
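The mount flow above runs a 9p file server on the host and mounts it in the node; the same steps by hand (a sketch; /tmp/export is a placeholder directory, and the --kill form is the cleanup used again in VerifyCleanup below):

    minikube mount -p functional-563235 /tmp/export:/mount-9p &
    minikube -p functional-563235 ssh "findmnt -T /mount-9p"
    minikube mount -p functional-563235 --kill=true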

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "442.925503ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "117.25009ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "338.754974ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "77.569823ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (7.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image load --daemon gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr
E0108 20:18:38.933178   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 image load --daemon gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr: (7.328932602s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (7.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-563235
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image load --daemon gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 image load --daemon gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr: (5.965517908s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdspecific-port609153744/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.28796ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdspecific-port609153744/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh "sudo umount -f /mount-9p": exit status 1 (574.645377ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-563235 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdspecific-port609153744/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)
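
The first findmnt probe above exits with status 1 because it races the mount daemon; the test simply retries. A sketch of the same start-then-poll pattern, assuming a hypothetical host directory /tmp/mount-src in place of the test-generated tmp path:

    // mountcheck.go: start "minikube mount" on a fixed port, then poll
    // findmnt over ssh until the 9p mount appears (the first probe can
    // race the daemon, exactly as in the transcript above).
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        mk, profile := "out/minikube-linux-amd64", "functional-563235"
        // /tmp/mount-src must already exist on the host.
        mount := exec.Command(mk, "mount", "-p", profile,
            "/tmp/mount-src:/mount-9p", "--port", "46464")
        if err := mount.Start(); err != nil {
            log.Fatal(err)
        }
        defer mount.Process.Kill() // best-effort cleanup

        for i := 0; i < 10; i++ {
            if exec.Command(mk, "-p", profile, "ssh",
                "findmnt -T /mount-9p | grep 9p").Run() == nil {
                log.Println("9p mount is up")
                return
            }
            time.Sleep(time.Second)
        }
        log.Fatal("mount never appeared")
    }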

TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T" /mount1: exit status 1 (531.341487ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-563235 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-563235 /tmp/TestFunctionalparallelMountCmdVerifyCleanup721812902/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)
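
The cleanup path this test verifies is a single command: "minikube mount --kill=true" tears down every mount daemon for the profile, which is why the three stop attempts above find no surviving parent process. A minimal sketch using the same flag as the log:

    // mountkill.go: kill all mount daemons for a profile in one shot,
    // the cleanup path VerifyCleanup checks above.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "mount",
            "-p", "functional-563235", "--kill=true").CombinedOutput()
        if err != nil {
            log.Fatalf("kill failed: %v\n%s", err, out)
        }
        log.Printf("mount daemons stopped\n%s", out)
    }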

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image save gcr.io/google-containers/addon-resizer:functional-563235 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image rm gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-563235 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.978028808s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.23s)
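
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile form a save/remove/restore round trip. A sketch of the same sequence, with a temporary tarball path standing in for the Jenkins workspace path above:

    // imageroundtrip.go: save a cluster image to a tarball, remove it from
    // the runtime, then restore it from the tarball and list images.
    package main

    import (
        "log"
        "os/exec"
    )

    // mk runs minikube against the profile from this log.
    func mk(args ...string) {
        args = append([]string{"-p", "functional-563235"}, args...)
        if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        img := "gcr.io/google-containers/addon-resizer:functional-563235"
        tar := "/tmp/addon-resizer-save.tar" // placeholder path
        mk("image", "save", img, tar)
        mk("image", "rm", img)
        mk("image", "load", tar)
        mk("image", "ls")
    }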

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-563235
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-563235 image save --daemon gcr.io/google-containers/addon-resizer:functional-563235 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-563235
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-563235
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-563235
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-563235
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (77.81s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-592184 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0108 20:20:00.853490   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-592184 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m17.806696812s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (77.81s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.96s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-592184 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-592184 addons enable ingress --alsologtostderr -v=5: (10.955225239s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.96s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-592184 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

TestJSONOutput/start/Command (40.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-836344 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0108 20:23:53.985377   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-836344 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (40.897232592s)
--- PASS: TestJSONOutput/start/Command (40.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-836344 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-836344 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-836344 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-836344 --output=json --user=testUser: (5.858508103s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-195687 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-195687 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.613969ms)
-- stdout --
	{"specversion":"1.0","id":"37d90a27-040d-47ac-bc18-c780ec170026","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-195687] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"842ae716-c283-4b94-8947-b15e19af9588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"5ebcbbee-90ed-4537-a7e4-202215241603","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3697a901-1040-44ca-b188-6979dd1fb85e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig"}}
	{"specversion":"1.0","id":"9f1afe15-9805-4138-a41a-4e49230a1377","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube"}}
	{"specversion":"1.0","id":"21d1940d-927d-40f2-bd7a-62f203a30cfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0eb0f50d-546b-428c-86f4-9a487073cd60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"58189127-0e24-44af-b590-a66ea08ac6b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-195687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-195687
--- PASS: TestErrorJSONOutput (0.27s)
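
Each line that "minikube start --output=json" prints, including the DRV_UNSUPPORTED_OS error above, is a CloudEvents envelope. A sketch of a consumer, modeling only the fields visible in this transcript and assuming every data value is a string, as it is in the events shown:

    // events.go: decode the line-delimited CloudEvents emitted by
    // "minikube start --output=json"; pipe that command's output to stdin.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    // event models just the fields used here; real events carry more.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                log.Printf("skipping unparsable line: %v", err)
                continue
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("exit %s (%s): %s\n",
                    ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
            }
        }
    }

Fed the transcript above, this would print: exit 56 (DRV_UNSUPPORTED_OS): The driver 'fail' is not supported on linux/amd64.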

TestKicCustomNetwork/create_custom_network (32.57s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-676301 --network=
E0108 20:24:34.945623   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-676301 --network=: (30.510411378s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-676301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-676301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-676301: (2.042127056s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.57s)

TestKicCustomNetwork/use_default_bridge_network (28.33s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-579742 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-579742 --network=bridge: (26.348172704s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-579742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-579742
E0108 20:25:32.024638   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:32.029948   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:32.040445   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:32.060851   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:32.101266   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:32.182235   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:32.342663   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:32.662851   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-579742: (1.963461969s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.33s)

TestKicExistingNetwork (29.65s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-762323 --network=existing-network
E0108 20:25:33.303189   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:34.583914   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:37.144297   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:42.264707   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:52.505687   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:25:56.867620   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-762323 --network=existing-network: (27.433268766s)
helpers_test.go:175: Cleaning up "existing-network-762323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-762323
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-762323: (2.058907882s)
--- PASS: TestKicExistingNetwork (29.65s)

TestKicCustomSubnet (29.48s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-642054 --subnet=192.168.60.0/24
E0108 20:26:12.985939   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-642054 --subnet=192.168.60.0/24: (27.25239606s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-642054 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-642054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-642054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-642054: (2.200011885s)
--- PASS: TestKicCustomSubnet (29.48s)
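
The subnet assertion reduces to one docker CLI call with the Go template shown above. A sketch, under the assumption that the profile (here a hypothetical custom-subnet-demo) names the Docker network, as it does for the kic driver in this run:

    // subnetcheck.go: start a profile with a fixed subnet, then read the
    // subnet back from the Docker network minikube created for it.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        const want = "192.168.60.0/24"
        profile := "custom-subnet-demo" // hypothetical profile name
        if out, err := exec.Command("out/minikube-linux-amd64", "start",
            "-p", profile, "--subnet="+want).CombinedOutput(); err != nil {
            log.Fatalf("start: %v\n%s", err, out)
        }
        out, err := exec.Command("docker", "network", "inspect", profile,
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        if got := strings.TrimSpace(string(out)); got != want {
            log.Fatalf("subnet mismatch: got %s, want %s", got, want)
        }
    }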

TestKicStaticIP (29.42s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-888867 --static-ip=192.168.200.200
E0108 20:26:53.946801   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-888867 --static-ip=192.168.200.200: (27.110867693s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-888867 ip
helpers_test.go:175: Cleaning up "static-ip-888867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-888867
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-888867: (2.143661339s)
--- PASS: TestKicStaticIP (29.42s)
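
TestKicStaticIP pairs --static-ip with "minikube ip"; the check is just a string comparison. A sketch with a hypothetical profile name and the IP from this run:

    // staticip.go: request a static node IP and confirm "minikube ip"
    // reports the same address.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        const want = "192.168.200.200"
        mk, profile := "out/minikube-linux-amd64", "static-ip-demo"
        if out, err := exec.Command(mk, "start", "-p", profile,
            "--static-ip="+want).CombinedOutput(); err != nil {
            log.Fatalf("start: %v\n%s", err, out)
        }
        out, err := exec.Command(mk, "-p", profile, "ip").Output()
        if err != nil {
            log.Fatal(err)
        }
        if got := strings.TrimSpace(string(out)); got != want {
            log.Fatalf("ip mismatch: got %s, want %s", got, want)
        }
    }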

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (54.48s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-185279 --driver=docker  --container-runtime=crio
E0108 20:27:17.008467   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-185279 --driver=docker  --container-runtime=crio: (25.489548723s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-189010 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-189010 --driver=docker  --container-runtime=crio: (23.535674988s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-185279
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-189010
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-189010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-189010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-189010: (1.984045667s)
helpers_test.go:175: Cleaning up "first-185279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-185279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-185279: (2.335352256s)
--- PASS: TestMinikubeProfile (54.48s)

TestMountStart/serial/StartWithMountFirst (5.84s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-783339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-783339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.834821052s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.84s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-783339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (8.53s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-800737 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-800737 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.532580066s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.53s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-800737 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-783339 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-783339 --alsologtostderr -v=5: (1.71126706s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-800737 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-800737
E0108 20:28:13.023022   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-800737: (1.246465051s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-800737
E0108 20:28:15.868572   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-800737: (6.350344035s)
--- PASS: TestMountStart/serial/RestartStopped (7.35s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-800737 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (135.59s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209824 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0108 20:28:40.707964   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:30:32.024784   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209824 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.091240753s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.59s)

TestMultiNode/serial/DeployApp2Nodes (3.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-209824 -- rollout status deployment/busybox: (2.043409847s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-6c6nv -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-v8fbl -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-6c6nv -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-v8fbl -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-6c6nv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-209824 -- exec busybox-5bc68d56bd-v8fbl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.97s)
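
The DNS probes above enumerate the busybox pods and resolve three names from each. A sketch of the same loop with plain kubectl; like the test, it assumes the default namespace contains only the busybox pods:

    // dnscheck.go: resolve cluster and external names from every pod in
    // the default namespace, mirroring the nslookup probes above.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "get", "pods",
            "-o", "jsonpath={.items[*].metadata.name}").Output()
        if err != nil {
            log.Fatal(err)
        }
        names := []string{"kubernetes.io", "kubernetes.default",
            "kubernetes.default.svc.cluster.local"}
        for _, pod := range strings.Fields(string(out)) {
            for _, host := range names {
                if err := exec.Command("kubectl", "exec", pod, "--",
                    "nslookup", host).Run(); err != nil {
                    log.Fatalf("%s could not resolve %s: %v", pod, host, err)
                }
            }
        }
        log.Println("all pods resolve all three names")
    }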

TestMultiNode/serial/AddNode (21.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-209824 -v 3 --alsologtostderr
E0108 20:30:59.709002   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-209824 -v 3 --alsologtostderr: (20.601416918s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.26s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-209824 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.31s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

TestMultiNode/serial/CopyFile (10.29s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp testdata/cp-test.txt multinode-209824:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2583116995/001/cp-test_multinode-209824.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824:/home/docker/cp-test.txt multinode-209824-m02:/home/docker/cp-test_multinode-209824_multinode-209824-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m02 "sudo cat /home/docker/cp-test_multinode-209824_multinode-209824-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824:/home/docker/cp-test.txt multinode-209824-m03:/home/docker/cp-test_multinode-209824_multinode-209824-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m03 "sudo cat /home/docker/cp-test_multinode-209824_multinode-209824-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp testdata/cp-test.txt multinode-209824-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2583116995/001/cp-test_multinode-209824-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824-m02:/home/docker/cp-test.txt multinode-209824:/home/docker/cp-test_multinode-209824-m02_multinode-209824.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824 "sudo cat /home/docker/cp-test_multinode-209824-m02_multinode-209824.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824-m02:/home/docker/cp-test.txt multinode-209824-m03:/home/docker/cp-test_multinode-209824-m02_multinode-209824-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m03 "sudo cat /home/docker/cp-test_multinode-209824-m02_multinode-209824-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp testdata/cp-test.txt multinode-209824-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2583116995/001/cp-test_multinode-209824-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824-m03:/home/docker/cp-test.txt multinode-209824:/home/docker/cp-test_multinode-209824-m03_multinode-209824.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824 "sudo cat /home/docker/cp-test_multinode-209824-m03_multinode-209824.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 cp multinode-209824-m03:/home/docker/cp-test.txt multinode-209824-m02:/home/docker/cp-test_multinode-209824-m03_multinode-209824-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 ssh -n multinode-209824-m02 "sudo cat /home/docker/cp-test_multinode-209824-m03_multinode-209824-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.29s)
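
Each CopyFile step above is a cp-then-cat round trip. The sketch below does one hop (host to the primary node) with a made-up payload, using the same cp and ssh subcommands as the helpers in the log:

    // cpcheck.go: copy a file into a node with "minikube cp" and read it
    // back over ssh to confirm the contents survived the hop.
    package main

    import (
        "bytes"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        mk, profile := "out/minikube-linux-amd64", "multinode-209824"
        want := []byte("hello from the host\n")
        if err := os.WriteFile("/tmp/cp-test.txt", want, 0o644); err != nil {
            log.Fatal(err)
        }
        if out, err := exec.Command(mk, "-p", profile, "cp", "/tmp/cp-test.txt",
            profile+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
            log.Fatalf("cp: %v\n%s", err, out)
        }
        got, err := exec.Command(mk, "-p", profile, "ssh", "-n", profile,
            "sudo cat /home/docker/cp-test.txt").Output()
        if err != nil {
            log.Fatal(err)
        }
        if !bytes.Equal(got, want) {
            log.Fatalf("content mismatch: %q", got)
        }
    }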

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-209824 node stop m03: (1.229959062s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209824 status: exit status 7 (522.985561ms)
-- stdout --
	multinode-209824
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-209824-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-209824-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr: exit status 7 (511.571992ms)
-- stdout --
	multinode-209824
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-209824-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-209824-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0108 20:31:20.893049  116966 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:31:20.893195  116966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:31:20.893200  116966 out.go:309] Setting ErrFile to fd 2...
	I0108 20:31:20.893205  116966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:31:20.893443  116966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:31:20.893719  116966 out.go:303] Setting JSON to false
	I0108 20:31:20.893766  116966 mustload.go:65] Loading cluster: multinode-209824
	I0108 20:31:20.893951  116966 notify.go:220] Checking for updates...
	I0108 20:31:20.894259  116966 config.go:182] Loaded profile config "multinode-209824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:31:20.894275  116966 status.go:255] checking status of multinode-209824 ...
	I0108 20:31:20.894809  116966 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:31:20.913746  116966 status.go:330] multinode-209824 host status = "Running" (err=<nil>)
	I0108 20:31:20.913790  116966 host.go:66] Checking if "multinode-209824" exists ...
	I0108 20:31:20.914072  116966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824
	I0108 20:31:20.934065  116966 host.go:66] Checking if "multinode-209824" exists ...
	I0108 20:31:20.934473  116966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:31:20.934543  116966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824
	I0108 20:31:20.952092  116966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824/id_rsa Username:docker}
	I0108 20:31:21.041200  116966 ssh_runner.go:195] Run: systemctl --version
	I0108 20:31:21.045758  116966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:31:21.058798  116966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:31:21.115871  116966 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2024-01-08 20:31:21.106632678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:31:21.116568  116966 kubeconfig.go:92] found "multinode-209824" server: "https://192.168.58.2:8443"
	I0108 20:31:21.116606  116966 api_server.go:166] Checking apiserver status ...
	I0108 20:31:21.116663  116966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:31:21.127983  116966 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	I0108 20:31:21.137463  116966 api_server.go:182] apiserver freezer: "12:freezer:/docker/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/crio/crio-9beafc11f6d367522f765aafaea43646c3fe10722c3d6e75377010b5149a1019"
	I0108 20:31:21.137559  116966 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8507f1719a09c808280058bb0847a0bae4d1da9b371ca43d4d04d28ab47955c8/crio/crio-9beafc11f6d367522f765aafaea43646c3fe10722c3d6e75377010b5149a1019/freezer.state
	I0108 20:31:21.148190  116966 api_server.go:204] freezer state: "THAWED"
	I0108 20:31:21.148236  116966 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 20:31:21.152986  116966 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 20:31:21.153018  116966 status.go:421] multinode-209824 apiserver status = Running (err=<nil>)
	I0108 20:31:21.153028  116966 status.go:257] multinode-209824 status: &{Name:multinode-209824 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:31:21.153048  116966 status.go:255] checking status of multinode-209824-m02 ...
	I0108 20:31:21.153383  116966 cli_runner.go:164] Run: docker container inspect multinode-209824-m02 --format={{.State.Status}}
	I0108 20:31:21.171284  116966 status.go:330] multinode-209824-m02 host status = "Running" (err=<nil>)
	I0108 20:31:21.171306  116966 host.go:66] Checking if "multinode-209824-m02" exists ...
	I0108 20:31:21.171620  116966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-209824-m02
	I0108 20:31:21.192549  116966 host.go:66] Checking if "multinode-209824-m02" exists ...
	I0108 20:31:21.192846  116966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:31:21.192882  116966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-209824-m02
	I0108 20:31:21.212548  116966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17907-11003/.minikube/machines/multinode-209824-m02/id_rsa Username:docker}
	I0108 20:31:21.305290  116966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:31:21.318502  116966 status.go:257] multinode-209824-m02 status: &{Name:multinode-209824-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:31:21.318548  116966 status.go:255] checking status of multinode-209824-m03 ...
	I0108 20:31:21.318865  116966 cli_runner.go:164] Run: docker container inspect multinode-209824-m03 --format={{.State.Status}}
	I0108 20:31:21.335032  116966 status.go:330] multinode-209824-m03 host status = "Stopped" (err=<nil>)
	I0108 20:31:21.335061  116966 status.go:343] host is not running, skipping remaining checks
	I0108 20:31:21.335067  116966 status.go:257] multinode-209824-m03 status: &{Name:multinode-209824-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
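
The stderr trace above shows how minikube's status command decides whether the apiserver is running: resolve the kube-apiserver PID with pgrep, read the process's freezer cgroup to confirm it is THAWED rather than frozen, then probe the /healthz endpoint over HTTPS. A minimal Go sketch of the same probe sequence (the hard-coded endpoint is taken from this log; kubeconfig resolution and the cgroup read are omitted, so this is an illustration rather than minikube's actual status code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the apiserver PID, as the log does with pgrep -xnf.
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(pid)))

	// Probe /healthz; the apiserver cert is self-signed, so skip
	// verification for a liveness-style check.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // expect "200 OK" on a healthy node
}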

TestMultiNode/serial/StartAfterStop (11.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-209824 node start m03 --alsologtostderr: (10.401446697s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.14s)

TestMultiNode/serial/RestartKeepsNodes (119.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-209824
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-209824
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-209824: (25.082011195s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209824 --wait=true -v=8 --alsologtostderr
E0108 20:32:17.009140   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:33:13.023066   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209824 --wait=true -v=8 --alsologtostderr: (1m34.755248571s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-209824
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.97s)

TestMultiNode/serial/DeleteNode (4.88s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-209824 node delete m03: (4.238645047s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.88s)
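
The Ready check above leans on kubectl's go-template output format. The template itself is plain Go text/template syntax and can be exercised locally against a stand-in for the decoded `kubectl get nodes -o json` payload (the sample data below is invented for illustration):

package main

import (
	"os"
	"text/template"
)

func main() {
	// The exact template string the test passes to kubectl.
	const tmpl = `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hand-built stand-in for the decoded node list.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per Ready node; the test expects one per node.
}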

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 stop
E0108 20:33:40.056125   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-209824 stop: (23.846466161s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209824 status: exit status 7 (112.246029ms)
-- stdout --
	multinode-209824
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-209824-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr: exit status 7 (111.13621ms)
-- stdout --
	multinode-209824
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-209824-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0108 20:34:01.364253  127127 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:34:01.364357  127127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:34:01.364365  127127 out.go:309] Setting ErrFile to fd 2...
	I0108 20:34:01.364369  127127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:34:01.364584  127127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:34:01.364790  127127 out.go:303] Setting JSON to false
	I0108 20:34:01.364830  127127 mustload.go:65] Loading cluster: multinode-209824
	I0108 20:34:01.364928  127127 notify.go:220] Checking for updates...
	I0108 20:34:01.365214  127127 config.go:182] Loaded profile config "multinode-209824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:34:01.365226  127127 status.go:255] checking status of multinode-209824 ...
	I0108 20:34:01.365616  127127 cli_runner.go:164] Run: docker container inspect multinode-209824 --format={{.State.Status}}
	I0108 20:34:01.387970  127127 status.go:330] multinode-209824 host status = "Stopped" (err=<nil>)
	I0108 20:34:01.388034  127127 status.go:343] host is not running, skipping remaining checks
	I0108 20:34:01.388048  127127 status.go:257] multinode-209824 status: &{Name:multinode-209824 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:34:01.388101  127127 status.go:255] checking status of multinode-209824-m02 ...
	I0108 20:34:01.388492  127127 cli_runner.go:164] Run: docker container inspect multinode-209824-m02 --format={{.State.Status}}
	I0108 20:34:01.406471  127127 status.go:330] multinode-209824-m02 host status = "Stopped" (err=<nil>)
	I0108 20:34:01.406492  127127 status.go:343] host is not running, skipping remaining checks
	I0108 20:34:01.406497  127127 status.go:257] multinode-209824-m02 status: &{Name:multinode-209824-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)
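
Note in the stderr trace that once `docker container inspect --format={{.State.Status}}` reports a stopped container, minikube short-circuits: the host is Stopped and the kubelet/apiserver probes are skipped entirely. A minimal sketch of that short-circuit (node names copied from this run; not minikube's actual status code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus asks the Docker driver for the container's state,
// exactly the command the trace shows.
func hostStatus(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, node := range []string{"multinode-209824", "multinode-209824-m02"} {
		state, err := hostStatus(node)
		if err != nil {
			fmt.Println(node, "inspect failed:", err)
			continue
		}
		if state != "running" {
			// Host not running: skip the remaining checks, as the log does.
			fmt.Println(node, "=> Stopped")
			continue
		}
		fmt.Println(node, "=> Running")
	}
}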

TestMultiNode/serial/RestartMultiNode (77.97s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209824 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209824 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.30154925s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-209824 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.97s)

TestMultiNode/serial/ValidateNameConflict (28.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-209824
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209824-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-209824-m02 --driver=docker  --container-runtime=crio: exit status 14 (104.238163ms)
-- stdout --
	* [multinode-209824-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-209824-m02' is duplicated with machine name 'multinode-209824-m02' in profile 'multinode-209824'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-209824-m03 --driver=docker  --container-runtime=crio
E0108 20:35:32.025143   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-209824-m03 --driver=docker  --container-runtime=crio: (26.17593079s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-209824
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-209824: exit status 80 (305.572211ms)
-- stdout --
	* Adding node m03 to cluster multinode-209824
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-209824-m03 already exists in multinode-209824-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-209824-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-209824-m03: (1.986159565s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.65s)
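
Both rejections above are name-collision guards: the first `start` fails (exit 14) because multinode-209824-m02 is already a machine name inside the multinode-209824 profile, and the later `node add` fails (exit 80) because the next node name collides with the standalone multinode-209824-m03 profile. A minimal sketch of such a uniqueness check (the data model is an invented stand-in, not minikube's config package):

package main

import "fmt"

// profile is a stand-in: one profile owns one or more machine names.
type profile struct {
	name     string
	machines []string
}

// nameTaken reports whether candidate collides with any profile or
// machine name, the condition behind the MK_USAGE rejection above.
func nameTaken(profiles []profile, candidate string) bool {
	for _, p := range profiles {
		if p.name == candidate {
			return true
		}
		for _, m := range p.machines {
			if m == candidate {
				return true
			}
		}
	}
	return false
}

func main() {
	known := []profile{{
		name:     "multinode-209824",
		machines: []string{"multinode-209824", "multinode-209824-m02"},
	}}
	fmt.Println(nameTaken(known, "multinode-209824-m02")) // true  -> rejected
	fmt.Println(nameTaken(known, "multinode-209824-m04")) // false -> allowed
}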

TestPreload (141.21s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-274451 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0108 20:37:17.008624   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-274451 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m25.096295213s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-274451 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-274451
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-274451: (5.7636503s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-274451 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-274451 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.86500139s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-274451 image list
E0108 20:38:13.023445   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "test-preload-274451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-274451
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-274451: (2.403542064s)
--- PASS: TestPreload (141.21s)
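
TestPreload's assertion reduces to: pull an image into a non-preloaded cluster, restart with preloads enabled, and confirm the image is still present. The final check can be reproduced with a shell-out to `image list` (binary path and profile name copied from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-274451", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("image survived the preload restart")
	} else {
		fmt.Println("image missing after restart")
	}
}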

TestScheduledStopUnix (103.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-252155 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-252155 --memory=2048 --driver=docker  --container-runtime=crio: (25.972012398s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-252155 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-252155 -n scheduled-stop-252155
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-252155 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-252155 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-252155 -n scheduled-stop-252155
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-252155
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-252155 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0108 20:39:36.070707   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-252155
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-252155: exit status 7 (92.344071ms)
-- stdout --
	scheduled-stop-252155
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-252155 -n scheduled-stop-252155
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-252155 -n scheduled-stop-252155: exit status 7 (90.046862ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-252155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-252155
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-252155: (5.489772645s)
--- PASS: TestScheduledStopUnix (103.10s)
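
The scheduled-stop assertions poll `status --format={{.Host}}` until the host reports Stopped; exit status 7 at that point is expected ("may be ok" in the log). A minimal polling sketch using the same flags this test runs (binary path and profile name copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		// Output() still returns the captured stdout when minikube exits 7.
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format", "{{.Host}}", "-p", "scheduled-stop-252155").Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}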

TestInsufficientStorage (13.77s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-609280 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-609280 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.177288003s)
-- stdout --
	{"specversion":"1.0","id":"43b20651-1a69-41dd-818d-a732f89c32df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-609280] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c7f52b4-c3a9-4366-9faf-2a066efaab78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"c6aa6dcf-acf4-48df-b840-f4290db7ffca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9be1293f-a0fe-4b4c-a31d-222d85142838","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig"}}
	{"specversion":"1.0","id":"eeebecdd-9e38-4a99-a4a8-806bc30fa773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube"}}
	{"specversion":"1.0","id":"766f25cb-71bb-4815-b28d-67d3abd6f3f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a1fbc3d0-7e0d-4b8c-854e-ebc37a5b9fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a2ff565e-5140-4ee8-8b10-25b745821e6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c0914722-4832-481b-9baa-5ecb28453bdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4a887152-f33a-482d-ac6e-b82231734478","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fe724e7-44c8-438e-8473-701c8058ae29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d2f976e1-6f91-4882-9f39-d7eda83783a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-609280 in cluster insufficient-storage-609280","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcb3fc2c-1de6-402e-a097-9fdc5d8c0884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"47003d56-83e1-4b6e-89d7-304243d09dfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bce9cf1-35a9-4477-8fa8-3f2c7cfa8f66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-609280 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-609280 --output=json --layout=cluster: exit status 7 (303.412047ms)
-- stdout --
	{"Name":"insufficient-storage-609280","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-609280","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0108 20:40:10.001202  148823 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-609280" does not appear in /home/jenkins/minikube-integration/17907-11003/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-609280 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-609280 --output=json --layout=cluster: exit status 7 (297.580798ms)
-- stdout --
	{"Name":"insufficient-storage-609280","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-609280","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0108 20:40:10.299474  148910 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-609280" does not appear in /home/jenkins/minikube-integration/17907-11003/kubeconfig
	E0108 20:40:10.309819  148910 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/insufficient-storage-609280/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-609280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-609280
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-609280: (1.987879558s)
--- PASS: TestInsufficientStorage (13.77s)
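
With --output=json, `minikube start` emits one CloudEvents-style JSON object per line; the storage failure above arrives as an io.k8s.sigs.minikube.error event carrying exitcode "26". A minimal consumer that streams those events and surfaces the error (field names taken from the output above; flags copied from this run):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event models the minikube JSON output lines shown above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "insufficient-storage-609280", "--memory=2048",
		"--output=json", "--driver=docker", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // not an event line
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Println("step:", e.Data["name"])
		case "io.k8s.sigs.minikube.error":
			fmt.Println("error:", e.Data["name"], "exitcode:", e.Data["exitcode"])
		}
	}
	cmd.Wait()
}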

TestKubernetesUpgrade (360.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.619712808s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-960438
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-960438: (2.322857747s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-960438 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-960438 status --format={{.Host}}: exit status 7 (144.936318ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.984318661s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-960438 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (113.130522ms)
-- stdout --
	* [kubernetes-upgrade-960438] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-960438
	    minikube start -p kubernetes-upgrade-960438 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9604382 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-960438 --kubernetes-version=v1.29.0-rc.2
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-960438 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.608113156s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-960438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-960438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-960438: (2.302594226s)
--- PASS: TestKubernetesUpgrade (360.17s)
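
The downgrade rejection above (exit 106, K8S_DOWNGRADE_UNSUPPORTED) is at heart a semantic-version comparison between the cluster's existing Kubernetes version and the one requested via --kubernetes-version. A sketch of such a guard using golang.org/x/mod/semver (an external module; this mirrors the behavior shown in the log, not minikube's actual code path):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	existing := "v1.29.0-rc.2" // version the cluster already runs
	requested := "v1.16.0"     // version passed on the command line

	if semver.Compare(requested, existing) < 0 {
		// Mirrors the refusal above: suggest delete/recreate instead.
		fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			existing, requested)
		return
	}
	fmt.Println("upgrade (or same-version restart) is allowed")
}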

TestMissingContainerUpgrade (178.04s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.4098870903.exe start -p missing-upgrade-990871 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.4098870903.exe start -p missing-upgrade-990871 --memory=2200 --driver=docker  --container-runtime=crio: (1m40.472684172s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-990871
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-990871: (11.068667122s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-990871
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-990871 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-990871 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.705249894s)
helpers_test.go:175: Cleaning up "missing-upgrade-990871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-990871
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-990871: (2.239591436s)
--- PASS: TestMissingContainerUpgrade (178.04s)

TestNetworkPlugins/group/false (9.1s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-140159 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-140159 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (288.639513ms)
-- stdout --
	* [false-140159] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0108 20:40:17.952760  150793 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:40:17.952894  150793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:40:17.952906  150793 out.go:309] Setting ErrFile to fd 2...
	I0108 20:40:17.952911  150793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:40:17.953196  150793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-11003/.minikube/bin
	I0108 20:40:17.953966  150793 out.go:303] Setting JSON to false
	I0108 20:40:17.955836  150793 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4944,"bootTime":1704741474,"procs":692,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:40:17.955950  150793 start.go:138] virtualization: kvm guest
	I0108 20:40:17.959280  150793 out.go:177] * [false-140159] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:40:17.961568  150793 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:40:17.961497  150793 notify.go:220] Checking for updates...
	I0108 20:40:17.963901  150793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:40:17.965928  150793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	I0108 20:40:17.967918  150793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	I0108 20:40:17.980798  150793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:40:17.982309  150793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:40:17.984574  150793 config.go:182] Loaded profile config "kubernetes-upgrade-960438": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 20:40:17.984697  150793 config.go:182] Loaded profile config "missing-upgrade-990871": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0108 20:40:17.984857  150793 config.go:182] Loaded profile config "offline-crio-940411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:40:17.985019  150793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:40:18.029423  150793 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 20:40:18.029594  150793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 20:40:18.123147  150793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:74 SystemTime:2024-01-08 20:40:18.097443592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0108 20:40:18.123392  150793 docker.go:295] overlay module found
	I0108 20:40:18.126771  150793 out.go:177] * Using the docker driver based on user configuration
	I0108 20:40:18.128746  150793 start.go:298] selected driver: docker
	I0108 20:40:18.128780  150793 start.go:902] validating driver "docker" against <nil>
	I0108 20:40:18.128803  150793 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:40:18.131553  150793 out.go:177] 
	W0108 20:40:18.133899  150793 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 20:40:18.135520  150793 out.go:177] 
** /stderr **
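
The exit 14 above comes from flag validation that runs before any driver work: the crio runtime has no built-in pod networking, so --cni=false is rejected outright. A minimal sketch of that rule (an illustration of the check the message describes, not minikube's validation code):

package main

import (
	"fmt"
	"os"
)

// validateCNI rejects --cni=false for runtimes that require a CNI
// plugin, matching the MK_USAGE message in the stderr block above.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}
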
net_test.go:88: 
----------------------- debugLogs start: false-140159 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-140159

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-140159

>>> host: /etc/nsswitch.conf:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: /etc/hosts:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: /etc/resolv.conf:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-140159

>>> host: crictl pods:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: crictl containers:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> k8s: describe netcat deployment:
error: context "false-140159" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-140159" does not exist

>>> k8s: netcat logs:
error: context "false-140159" does not exist

>>> k8s: describe coredns deployment:
error: context "false-140159" does not exist

>>> k8s: describe coredns pods:
error: context "false-140159" does not exist

>>> k8s: coredns logs:
error: context "false-140159" does not exist

>>> k8s: describe api server pod(s):
error: context "false-140159" does not exist

>>> k8s: api server logs:
error: context "false-140159" does not exist

>>> host: /etc/cni:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: ip a s:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: ip r s:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: iptables-save:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: iptables table nat:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> k8s: describe kube-proxy daemon set:
error: context "false-140159" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-140159" does not exist

>>> k8s: kube-proxy logs:
error: context "false-140159" does not exist

>>> host: kubelet daemon status:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: kubelet daemon config:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> k8s: kubelet logs:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-140159

>>> host: docker daemon status:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: docker daemon config:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: /etc/docker/daemon.json:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: docker system info:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: cri-docker daemon status:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: cri-docker daemon config:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: cri-dockerd version:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

>>> host: containerd daemon status:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140159"

                                                
                                                
----------------------- debugLogs end: false-140159 [took: 8.51369057s] --------------------------------
helpers_test.go:175: Cleaning up "false-140159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-140159
--- PASS: TestNetworkPlugins/group/false (9.10s)
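Note on the block above: the "false" network-plugin case never brings a cluster up, so the post-run debugLogs sweep runs against a profile that was never created, and every probe reports a missing profile or kubectl context. That is expected noise, not a failure. A minimal sketch of the same probes run by hand (the profile/context name is the test's, not a live cluster):

	# list known profiles; a never-started profile will not appear
	minikube profile list
	# the kube-proxy probes need a kubeconfig context, which was never written
	kubectl --context false-140159 -n kube-system describe ds kube-proxy
	# the "k8s: kubectl config" probe is just the merged kubeconfig view
	kubectl config view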

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.63s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-181266
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.63s)

TestPause/serial/Start (44.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-960229 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0108 20:42:17.008744   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-960229 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.849900037s)
--- PASS: TestPause/serial/Start (44.85s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-538487 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-538487 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (94.425715ms)

-- stdout --
	* [NoKubernetes-538487] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-11003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-11003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
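For reference, a sketch of the guard this test exercises: minikube refuses to combine --no-kubernetes with an explicit --kubernetes-version, exiting with status 14 (MK_USAGE), and the fix it suggests is the config-unset command from the stderr above. The profile name "demo" below is hypothetical:

	# expected to fail with exit status 14 (MK_USAGE)
	minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
	echo $?   # 14
	# if kubernetes-version was persisted in global config, clear it as the error suggests
	minikube config unset kubernetes-version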

TestNoKubernetes/serial/StartWithK8s (27.43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-538487 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-538487 --driver=docker  --container-runtime=crio: (27.071933764s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-538487 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.43s)

TestNoKubernetes/serial/StartWithStopK8s (8.57s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-538487 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-538487 --no-kubernetes --driver=docker  --container-runtime=crio: (5.975006423s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-538487 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-538487 status -o json: exit status 2 (343.78416ms)

-- stdout --
	{"Name":"NoKubernetes-538487","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-538487
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-538487: (2.252514893s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.57s)

TestPause/serial/SecondStartNoReconfiguration (30.39s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-960229 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-960229 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.360774465s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.39s)

TestNoKubernetes/serial/Start (5.25s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-538487 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-538487 --no-kubernetes --driver=docker  --container-runtime=crio: (5.246991508s)
--- PASS: TestNoKubernetes/serial/Start (5.25s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-538487 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-538487 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.700223ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
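The check here is plain systemd: `systemctl is-active --quiet` exits 0 only for an active unit, and the status-3 exit seen above corresponds to an inactive unit, which is exactly what a --no-kubernetes node should report. A sketch of the same probe, using the command verbatim from the test:

	# non-zero exit (3 = inactive) confirms the kubelet is not running
	out/minikube-linux-amd64 ssh -p NoKubernetes-538487 "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet inactive, as expected"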

TestNoKubernetes/serial/ProfileList (2.63s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.807287726s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.63s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-538487
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-538487: (1.294857673s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (9.09s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-538487 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-538487 --driver=docker  --container-runtime=crio: (9.089106465s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-538487 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-538487 "sudo systemctl is-active --quiet service kubelet": exit status 1 (362.431642ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestPause/serial/Pause (0.98s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-960229 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)

TestPause/serial/VerifyStatus (0.46s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-960229 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-960229 --output=json --layout=cluster: exit status 2 (462.797014ms)

-- stdout --
	{"Name":"pause-960229","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-960229","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
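The cluster-layout JSON above encodes component states as HTTP-like codes (200 OK, 405 Stopped, 418 Paused), and `minikube status` exits 2 here rather than 0, which the test treats as the expected outcome for a paused cluster. A hedged sketch of pulling the fields out; jq is an assumption, not part of the test:

	out/minikube-linux-amd64 status -p pause-960229 --output=json --layout=cluster \
	  | jq -r '.StatusName, .Nodes[0].Components.apiserver.StatusName'
	# -> Paused
	# -> Paused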

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-960229 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-960229 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (3.01s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-960229 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-960229 --alsologtostderr -v=5: (3.011095143s)
--- PASS: TestPause/serial/DeletePaused (3.01s)

TestPause/serial/VerifyDeletedResources (0.77s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-960229
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-960229: exit status 1 (22.627063ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-960229: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.77s)
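The teardown verification is three independent docker lookups; once the profile is deleted, the container list, volume inspect, and network list should all come back empty, and `docker volume inspect` exits 1 on a missing volume, as captured above. A rough equivalent by hand:

	docker ps -a --filter name=pause-960229 --format '{{.Names}}'      # no output expected
	docker volume inspect pause-960229 >/dev/null 2>&1 || echo "volume gone"
	docker network ls --filter name=pause-960229 --format '{{.Name}}'  # no output expected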

TestNetworkPlugins/group/auto/Start (72.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m12.115864096s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.12s)

TestNetworkPlugins/group/kindnet/Start (70.9s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.898150365s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.90s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-140159 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-140159 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9ts77" [df7628d7-1b9d-4537-b691-0d12584fa0d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9ts77" [df7628d7-1b9d-4537-b691-0d12584fa0d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005509655s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-140159 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
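All the Localhost/HairPin cases in these plugin groups use the same two netcat probes from inside the deployment: -z opens the TCP port without sending data and -w 5 bounds the wait at five seconds. The hairpin probe dials the pod's own Service name ("netcat"), checking that the CNI routes a pod back to itself through its service. For reference, verbatim from the runs above:

	# localhost probe: the pod can reach its own port directly
	kubectl --context auto-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin probe: the pod reaches itself via the "netcat" service
	kubectl --context auto-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"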

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h558m" [a3c831a0-aa8d-4dad-88dc-50a4da2ad8c8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006567995s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-140159 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-140159 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lqjn7" [f3e76007-2c78-492b-a649-0faa5fa17e3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 20:45:32.025084   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lqjn7" [f3e76007-2c78-492b-a649-0faa5fa17e3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004262382s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-140159 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (67.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m7.578190188s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.58s)

TestNetworkPlugins/group/custom-flannel/Start (66.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.12149658s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.12s)
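Unlike the named-plugin runs (--cni=kindnet, --cni=calico, --cni=flannel), the custom-flannel variant passes --cni a manifest path, so any kubectl-appliable CNI manifest can be substituted; the path here is the suite's own fixture, relative to the test working directory. The shape of the invocation, trimmed from the run above:

	# --cni accepts a manifest path as well as a plugin name
	minikube start -p custom-flannel-140159 --memory=3072 \
	  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio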

TestNetworkPlugins/group/enable-default-cni/Start (80.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.191814464s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.19s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cs4kc" [251f2e81-e0d7-4e90-bcb6-d501764b6717] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006114152s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-140159 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-140159 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b4wpp" [ec415a07-f698-498e-8cd5-d5a341c6e82e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b4wpp" [ec415a07-f698-498e-8cd5-d5a341c6e82e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005218556s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-140159 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-140159 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-140159 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t4vkb" [c19d495c-787a-4a2f-94d9-c6c4541fb056] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t4vkb" [c19d495c-787a-4a2f-94d9-c6c4541fb056] Running
E0108 20:47:17.008418   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003789715s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-140159 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (66.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.794283221s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.79s)

TestNetworkPlugins/group/bridge/Start (84.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-140159 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m24.561283897s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.56s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-140159 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-140159 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mhpfp" [106d1d42-ee84-4235-9c1e-4795b6ea4568] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mhpfp" [106d1d42-ee84-4235-9c1e-4795b6ea4568] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005671141s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.74s)

TestStartStop/group/old-k8s-version/serial/FirstStart (118.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-554655 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-554655 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m58.180908986s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (118.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-140159 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestStartStop/group/no-preload/serial/FirstStart (72.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-183849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 20:48:13.023044   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-183849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m12.914825389s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.91s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q54g7" [7d163752-9e3d-47be-81d2-5b7445f763e8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005496182s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-140159 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-140159 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-txgvr" [b2d5d33c-4ef0-4899-a4e5-5cac930344ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-txgvr" [b2d5d33c-4ef0-4899-a4e5-5cac930344ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00501307s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-140159 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-140159 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-140159 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8c84x" [6f0331db-b6d8-4029-a68f-208e119a0348] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8c84x" [6f0331db-b6d8-4029-a68f-208e119a0348] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006472068s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-140159 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-140159 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0108 20:56:46.933506   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (47.23s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-487560 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-487560 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (47.229992732s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.23s)

TestStartStop/group/no-preload/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-183849 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [822d6591-b720-4e5b-9952-2cbfaf101fe6] Pending
helpers_test.go:344: "busybox" [822d6591-b720-4e5b-9952-2cbfaf101fe6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [822d6591-b720-4e5b-9952-2cbfaf101fe6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00614132s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-183849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-324156 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-324156 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (39.92756083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.93s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.97s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-183849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-183849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.867771477s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-183849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.97s)

TestStartStop/group/no-preload/serial/Stop (12.32s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-183849 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-183849 --alsologtostderr -v=3: (12.320150067s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-554655 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7573799a-80d3-47b6-aa26-5960498df5db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7573799a-80d3-47b6-aa26-5960498df5db] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003514085s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-554655 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183849 -n no-preload-183849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183849 -n no-preload-183849: exit status 7 (111.282982ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-183849 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)
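
Note: exit status 7 from `minikube status` is the expected code for a stopped host, which is why the test logs "may be ok" and continues. A minimal sketch of the same tolerate-then-enable pattern (illustrative shell, not the test's code):

    # status exits non-zero while the profile is stopped; capture rather than abort
    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183849 \
        || echo "status exited $? (Stopped is expected here)"
    # addons can be toggled while the cluster is down; they apply on the next start
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-183849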

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (341.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-183849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-183849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m40.877765936s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183849 -n no-preload-183849
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (341.24s)
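
Note: SecondStart restarts the profile stopped earlier with the same non-default flags, checking that cluster state survives a stop/start cycle; the 5m40s here is likely dominated by image pulls, since --preload=false disables minikube's preloaded image tarball. A condensed sketch of the cycle, using only commands from the steps above:

    out/minikube-linux-amd64 stop -p no-preload-183849
    out/minikube-linux-amd64 start -p no-preload-183849 --memory=2200 --wait=true \
        --preload=false --driver=docker --container-runtime=crio \
        --kubernetes-version=v1.29.0-rc.2
    # host should be back to Running (exit 0)
    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183849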

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-554655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-554655 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-554655 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-554655 --alsologtostderr -v=3: (12.370230854s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-487560 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2cf497a3-2222-4eae-aabd-0afcd4f9c933] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2cf497a3-2222-4eae-aabd-0afcd4f9c933] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004895922s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-487560 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-554655 -n old-k8s-version-554655
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-554655 -n old-k8s-version-554655: exit status 7 (137.469841ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-554655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (428.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-554655 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-554655 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m7.992859896s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-554655 -n old-k8s-version-554655
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (428.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-324156 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8e2be115-2795-408f-93c1-da1f2b59ec6a] Pending
helpers_test.go:344: "busybox" [8e2be115-2795-408f-93c1-da1f2b59ec6a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8e2be115-2795-408f-93c1-da1f2b59ec6a] Running
E0108 20:50:09.845056   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:11.125493   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:13.686053   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003631305s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-324156 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-487560 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-487560 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.179630745s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-487560 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-487560 --alsologtostderr -v=3
E0108 20:50:08.565037   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:08.570375   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:08.580678   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:08.600925   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:08.641470   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:08.721837   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:08.882764   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:50:09.203870   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-487560 --alsologtostderr -v=3: (12.286297238s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-324156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-324156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04777914s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-324156 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-324156 --alsologtostderr -v=3
E0108 20:50:18.807313   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-324156 --alsologtostderr -v=3: (12.260906839s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487560 -n embed-certs-487560
E0108 20:50:20.057181   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487560 -n embed-certs-487560: exit status 7 (106.774802ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-487560 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (613.58s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-487560 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 20:50:22.163943   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:22.169281   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:22.179546   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:22.199908   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:22.240740   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:22.321322   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:22.482509   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:22.803224   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:23.443905   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:24.724786   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:27.285684   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-487560 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m13.271587906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487560 -n embed-certs-487560
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (613.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156
E0108 20:50:29.047753   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156: exit status 7 (108.683732ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-324156 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (348.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-324156 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 20:50:32.024643   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
E0108 20:50:32.406111   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:42.646673   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:50:49.528282   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:51:03.127650   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:51:30.489242   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:51:44.088226   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:51:46.933698   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:46.938997   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:46.949342   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:46.969695   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:47.010089   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:47.090529   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:47.251142   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:47.572082   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:48.213097   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:49.493277   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:52.053408   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:51:57.173990   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:52:07.414792   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:52:07.854209   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:07.859486   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:07.869870   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:07.890248   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:07.930592   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:08.011001   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:08.171493   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:08.492123   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:09.133099   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:10.413638   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:12.973964   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:17.008390   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
E0108 20:52:18.095056   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:27.895876   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:52:28.335567   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:33.605667   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:33.611007   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:33.621371   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:33.641809   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:33.682095   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:33.762446   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:33.922920   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:34.243571   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:34.884164   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:36.165246   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:38.725896   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:43.846338   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:52:48.816619   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:52:52.409485   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:52:54.086746   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:53:06.008885   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
E0108 20:53:08.856188   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:53:13.023697   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
E0108 20:53:14.568019   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:53:29.777452   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:53:31.749207   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:31.754529   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:31.764868   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:31.785234   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:31.825586   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:31.906564   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:32.067104   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:32.387574   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:33.028181   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:34.308753   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:36.869716   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:41.990690   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:52.231116   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:53:52.250336   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:52.255661   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:52.266034   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:52.286429   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:52.326835   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:52.407209   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:52.567722   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:52.888639   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:53.528912   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:54.809988   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:53:55.528586   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:53:57.370594   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:54:02.491536   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:54:12.711682   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:54:12.731929   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:54:30.776501   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
E0108 20:54:33.213095   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:54:51.697774   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/custom-flannel-140159/client.crt: no such file or directory
E0108 20:54:53.672751   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:55:08.564678   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
E0108 20:55:14.174137   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/bridge-140159/client.crt: no such file or directory
E0108 20:55:17.449471   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/enable-default-cni-140159/client.crt: no such file or directory
E0108 20:55:22.164619   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/kindnet-140159/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-324156 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m48.033568246s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (348.57s)
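
Note: the long run of cert_rotation.go:168 errors interleaved above appears to come from the shared test process (pid 17761) trying to reload client certificates for profiles deleted earlier in the run (auto-140159, kindnet-140159, calico-140159, and so on); they are background noise, not failures of this test. When scanning a log like this one, they can be filtered out (test-run.log below is a placeholder filename):

    # drop the benign cert-rotation noise before reading a run's output
    grep -v 'cert_rotation.go' test-run.log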

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2htw7" [2001f4c1-fcab-4c6c-8543-a2034c352fc6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0108 20:55:32.024684   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2htw7" [2001f4c1-fcab-4c6c-8543-a2034c352fc6] Running
E0108 20:55:36.250499   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/auto-140159/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004982648s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2htw7" [2001f4c1-fcab-4c6c-8543-a2034c352fc6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006169955s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-183849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-183849 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
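
Note: VerifyKubernetesImages lists every image held by the profile's container runtime and calls out the ones outside the expected Kubernetes image set, which is why busybox and kindnetd are flagged above. A quick manual equivalent (the grep filter is illustrative):

    # dump all images in the profile's runtime as JSON
    out/minikube-linux-amd64 -p no-preload-183849 image list --format=json
    # or skim the non-registry.k8s.io entries in the plain listing
    out/minikube-linux-amd64 -p no-preload-183849 image list | grep -v registry.k8s.io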

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.42s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-183849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-183849 -n no-preload-183849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-183849 -n no-preload-183849: exit status 2 (394.4422ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-183849 -n no-preload-183849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-183849 -n no-preload-183849: exit status 2 (375.258367ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-183849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-183849 -n no-preload-183849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-183849 -n no-preload-183849
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.42s)
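
Note: the Pause sequence relies on the same "non-zero status may be ok" convention as EnableAddonAfterStop: after `pause`, the API server reports Paused and the kubelet reports Stopped (both exit 2), and `unpause` restores both. A condensed sketch:

    out/minikube-linux-amd64 pause -p no-preload-183849
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-183849   # Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-183849     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p no-preload-183849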

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-207474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 20:56:15.593837   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/flannel-140159/client.crt: no such file or directory
E0108 20:56:16.071927   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/functional-563235/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-207474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (41.413448825s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nm8d6" [7805a9cb-96d7-4560-8dd7-89a9953700f0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nm8d6" [7805a9cb-96d7-4560-8dd7-89a9953700f0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004160362s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nm8d6" [7805a9cb-96d7-4560-8dd7-89a9953700f0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006009415s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-324156 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-324156 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-324156 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156: exit status 2 (369.482457ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156: exit status 2 (369.250104ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-324156 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-324156 -n default-k8s-diff-port-324156
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-207474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-207474 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-207474 --alsologtostderr -v=3: (3.720002408s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-207474 -n newest-cni-207474
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-207474 -n newest-cni-207474: exit status 7 (90.636186ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-207474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (26.13s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-207474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-207474 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (25.81002002s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-207474 -n newest-cni-207474
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.13s)
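Note: the second start reuses the stopped profile with the same flags as the first start; --wait restricts readiness checks to the apiserver, system pods and the default service account. A minimal sketch of the command logged above:

# restart the existing profile; kubeadm keeps the 10.42.0.0/16 pod CIDR passed via --extra-config
out/minikube-linux-amd64 start -p newest-cni-207474 --memory=2200 \
  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.2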

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-207474 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
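Note: the image check is driven by "image list --format=json"; the test parses that JSON and reports anything outside the expected minikube image set (here kindnetd). A minimal sketch:

# list images loaded in the profile as JSON, as the verification step does
out/minikube-linux-amd64 -p newest-cni-207474 image list --format=json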

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.7s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-207474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207474 -n newest-cni-207474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207474 -n newest-cni-207474: exit status 2 (310.574155ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-207474 -n newest-cni-207474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-207474 -n newest-cni-207474: exit status 2 (305.23648ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-207474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207474 -n newest-cni-207474
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-207474 -n newest-cni-207474
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.70s)
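Note: after pause, status reports APIServer=Paused and Kubelet=Stopped, each via exit status 2, which the test tolerates; unpause restores both (the final status calls above complete without a non-zero exit). A minimal sketch of the sequence:

# pause the control plane, observe the paused status, then unpause
out/minikube-linux-amd64 pause -p newest-cni-207474
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207474   # prints Paused, exit status 2
out/minikube-linux-amd64 unpause -p newest-cni-207474
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-207474   # succeeds once unpaused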

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cx5hn" [e8182438-fb3d-4404-84a3-bb2d60b5422b] Running
E0108 20:57:14.616962   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/calico-140159/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004333036s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
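Note: the readiness wait above polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. Expressed as an equivalent kubectl wait (the test itself polls through its helpers; the timeout here is illustrative, the test allows up to 9m):

# wait for the dashboard pod to become Ready, mirroring the test's poll
kubectl --context old-k8s-version-554655 wait --for=condition=ready \
  --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=120s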

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cx5hn" [e8182438-fb3d-4404-84a3-bb2d60b5422b] Running
E0108 20:57:17.009057   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/addons-793365/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003890949s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-554655 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-554655 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-554655 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-554655 -n old-k8s-version-554655
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-554655 -n old-k8s-version-554655: exit status 2 (297.780052ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-554655 -n old-k8s-version-554655
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-554655 -n old-k8s-version-554655: exit status 2 (301.346796ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-554655 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-554655 -n old-k8s-version-554655
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-554655 -n old-k8s-version-554655
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.67s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xcx98" [f8c4914d-aa18-4238-9395-645d9abca944] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00375003s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xcx98" [f8c4914d-aa18-4238-9395-645d9abca944] Running
E0108 21:00:44.127025   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/no-preload-183849/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003698012s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-487560 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-487560 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.62s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-487560 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-487560 -n embed-certs-487560
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-487560 -n embed-certs-487560: exit status 2 (282.538852ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-487560 -n embed-certs-487560
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-487560 -n embed-certs-487560: exit status 2 (295.037562ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-487560 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-487560 -n embed-certs-487560
E0108 21:00:47.377871   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/default-k8s-diff-port-324156/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-487560 -n embed-certs-487560
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)

Test skip (27/316)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (5.55s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-140159 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-140159

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-140159

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /etc/hosts:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /etc/resolv.conf:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-140159

>>> host: crictl pods:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: crictl containers:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> k8s: describe netcat deployment:
error: context "kubenet-140159" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-140159" does not exist

>>> k8s: netcat logs:
error: context "kubenet-140159" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-140159" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-140159" does not exist

>>> k8s: coredns logs:
error: context "kubenet-140159" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-140159" does not exist

>>> k8s: api server logs:
error: context "kubenet-140159" does not exist

>>> host: /etc/cni:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: ip a s:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: ip r s:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: iptables-save:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: iptables table nat:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-140159" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-140159" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-140159" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: kubelet daemon config:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> k8s: kubelet logs:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-140159

>>> host: docker daemon status:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: docker daemon config:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: docker system info:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: cri-docker daemon status:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: cri-docker daemon config:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: cri-dockerd version:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: containerd daemon status:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: containerd daemon config:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: containerd config dump:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: crio daemon status:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: crio daemon config:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: /etc/crio:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"

>>> host: crio config:
* Profile "kubenet-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140159"
----------------------- debugLogs end: kubenet-140159 [took: 5.339799286s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-140159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-140159
--- SKIP: TestNetworkPlugins/group/kubenet (5.55s)
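Note: kubenet is skipped because the crio runtime requires a CNI plugin. A hypothetical manual run would select an explicit CNI instead, e.g. via minikube's --cni flag (sketch only; this profile was never started in this run):

# start a crio-based profile with an explicit CNI rather than kubenet
out/minikube-linux-amd64 start -p kubenet-140159 --container-runtime=crio --cni=bridge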

                                                
                                    
TestNetworkPlugins/group/cilium (6.69s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0108 20:40:32.024446   17761 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-11003/.minikube/profiles/ingress-addon-legacy-592184/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: cilium-140159 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-140159

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-140159" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-140159" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-140159
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-140159
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-140159" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-140159" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-140159" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-140159" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-140159" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: kubelet daemon config:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> k8s: kubelet logs:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-140159
>>> host: docker daemon status:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: docker daemon config:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: docker system info:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: cri-docker daemon status:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: cri-docker daemon config:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: cri-dockerd version:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: containerd daemon status:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: containerd daemon config:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: containerd config dump:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: crio daemon status:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: crio daemon config:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: /etc/crio:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
>>> host: crio config:
* Profile "cilium-140159" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140159"
----------------------- debugLogs end: cilium-140159 [took: 6.448791849s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-140159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-140159
--- SKIP: TestNetworkPlugins/group/cilium (6.69s)
x
+
TestStartStop/group/disable-driver-mounts (0.2s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-294626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-294626
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)