Test Report: Docker_Linux_crio 17885

b721bab7b488b5e07b471be256ee12ce84535d3b:2024-01-03:32546

Failed tests (5/316)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                         | 154.92       |
| 167   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 187.51       |
| 217   | TestMultiNode/serial/PingHostFrom2Pods              | 3.29         |
| 239   | TestRunningBinaryUpgrade                            | 74.78        |
| 247   | TestStoppedBinaryUpgrade/Upgrade                    | 107.32       |
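
Each failure is expanded below. To re-run one of these tests locally, a minimal sketch (assuming a minikube checkout at the commit above, its standard Makefile target for the Linux binary, and the integration suite's --minikube-start-args flag; the driver and runtime values mirror this job's configuration):

	# Build the binary under test, then re-run a single failed test.
	make out/minikube-linux-amd64
	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m \
	  -args --minikube-start-args='--driver=docker --container-runtime=crio'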
TestAddons/parallel/Ingress (154.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-173367 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-173367 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-173367 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a0cb4e1f-6d7f-4c4c-9c13-dc40389d5a90] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a0cb4e1f-6d7f-4c4c-9c13-dc40389d5a90] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.00339743s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-173367 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.299038245s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-173367 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-173367 addons disable ingress-dns --alsologtostderr -v=1: (1.035315942s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-173367 addons disable ingress --alsologtostderr -v=1: (7.617027771s)
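
A note on the error above: "ssh: Process exited with status 28" surfaces the exit code of the curl run inside the node, and curl uses 28 for CURLE_OPERATION_TIMEDOUT, so the ingress returned no response within the timeout rather than refusing the connection. The probe can be repeated by hand; a sketch (profile name taken from this run; the explicit --max-time bound is an illustrative assumption):

	# Re-run the ingress probe the test performs; exit code 28 means curl timed out.
	out/minikube-linux-amd64 -p addons-173367 ssh \
	  "curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/; echo exit=\$?"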
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-173367
helpers_test.go:235: (dbg) docker inspect addons-173367:

-- stdout --
	[
	    {
	        "Id": "761357a2d6c3b21548e437efd57d16516164b8e567be2bf4de1f17c07fe8fcc0",
	        "Created": "2024-01-03T18:59:40.946188444Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 17636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T18:59:41.255521095Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/761357a2d6c3b21548e437efd57d16516164b8e567be2bf4de1f17c07fe8fcc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/761357a2d6c3b21548e437efd57d16516164b8e567be2bf4de1f17c07fe8fcc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/761357a2d6c3b21548e437efd57d16516164b8e567be2bf4de1f17c07fe8fcc0/hosts",
	        "LogPath": "/var/lib/docker/containers/761357a2d6c3b21548e437efd57d16516164b8e567be2bf4de1f17c07fe8fcc0/761357a2d6c3b21548e437efd57d16516164b8e567be2bf4de1f17c07fe8fcc0-json.log",
	        "Name": "/addons-173367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-173367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-173367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d03468f17a9cb6b89662e098286e050cbb6215461f424514be219a602ceef606-init/diff:/var/lib/docker/overlay2/a5364ccac14714ee0f769c339926d51ad0bbde3642ccbcf0e3661d2982bd002b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d03468f17a9cb6b89662e098286e050cbb6215461f424514be219a602ceef606/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d03468f17a9cb6b89662e098286e050cbb6215461f424514be219a602ceef606/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d03468f17a9cb6b89662e098286e050cbb6215461f424514be219a602ceef606/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-173367",
	                "Source": "/var/lib/docker/volumes/addons-173367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-173367",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-173367",
	                "name.minikube.sigs.k8s.io": "addons-173367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b58bfd3cf1bab7391839becf44adedfb6fd9db1ba4eb3432e64a3567b516459",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6b58bfd3cf1b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-173367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "761357a2d6c3",
	                        "addons-173367"
	                    ],
	                    "NetworkID": "2d7042e3d1a570ccb9d60d9a86cc7953aae868ae9986204e163d34d8fef9cfd2",
	                    "EndpointID": "3a54e0f7995deebe16d517b32ba1b448608826bf0ea0bbb2d78da323728fb9d9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
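
When scanning a dump like the one above, the fields that usually matter for an ingress failure are the container state, the published host ports, and the node IP. A short jq sketch (assuming jq is available on the host) pulls out just those:

	# Summarize the relevant parts of the docker inspect output.
	docker inspect addons-173367 | jq '.[0] | {
	  state: .State.Status,
	  ports: .NetworkSettings.Ports,
	  node_ip: .NetworkSettings.Networks["addons-173367"].IPAddress
	}'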
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-173367 -n addons-173367
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-173367 logs -n 25: (1.171509112s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-365804                                                                     | download-only-365804   | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC | 03 Jan 24 18:59 UTC |
	| delete  | -p download-only-365804                                                                     | download-only-365804   | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC | 03 Jan 24 18:59 UTC |
	| start   | --download-only -p                                                                          | download-docker-079803 | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC |                     |
	|         | download-docker-079803                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-079803                                                                   | download-docker-079803 | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC | 03 Jan 24 18:59 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-757574   | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC |                     |
	|         | binary-mirror-757574                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34341                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-757574                                                                     | binary-mirror-757574   | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC | 03 Jan 24 18:59 UTC |
	| addons  | enable dashboard -p                                                                         | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC |                     |
	|         | addons-173367                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC |                     |
	|         | addons-173367                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-173367 --wait=true                                                                | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 18:59 UTC | 03 Jan 24 19:01 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:01 UTC | 03 Jan 24 19:01 UTC |
	|         | -p addons-173367                                                                            |                        |         |         |                     |                     |
	| addons  | addons-173367 addons disable                                                                | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-173367 ip                                                                            | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	| addons  | addons-173367 addons disable                                                                | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | addons-173367                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | -p addons-173367                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-173367 ssh cat                                                                       | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | /opt/local-path-provisioner/pvc-2c4082d9-6259-471c-9c2c-8d8a577bcbfb_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-173367 addons disable                                                                | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-173367 ssh curl -s                                                                   | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-173367 addons                                                                        | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | addons-173367                                                                               |                        |         |         |                     |                     |
	| addons  | addons-173367 addons                                                                        | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-173367 addons                                                                        | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:02 UTC | 03 Jan 24 19:02 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-173367 ip                                                                            | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:04 UTC | 03 Jan 24 19:04 UTC |
	| addons  | addons-173367 addons disable                                                                | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:04 UTC | 03 Jan 24 19:04 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-173367 addons disable                                                                | addons-173367          | jenkins | v1.32.0 | 03 Jan 24 19:04 UTC | 03 Jan 24 19:04 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:59:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:59:19.774538   16974 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:59:19.774790   16974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:59:19.774799   16974 out.go:309] Setting ErrFile to fd 2...
	I0103 18:59:19.774804   16974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:59:19.775015   16974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 18:59:19.775662   16974 out.go:303] Setting JSON to false
	I0103 18:59:19.776461   16974 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2506,"bootTime":1704305854,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:59:19.776519   16974 start.go:138] virtualization: kvm guest
	I0103 18:59:19.778752   16974 out.go:177] * [addons-173367] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:59:19.780008   16974 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 18:59:19.780012   16974 notify.go:220] Checking for updates...
	I0103 18:59:19.781636   16974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:59:19.783068   16974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 18:59:19.784461   16974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 18:59:19.785748   16974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 18:59:19.787103   16974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 18:59:19.788565   16974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 18:59:19.809012   16974 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 18:59:19.809115   16974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:59:19.854598   16974 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-03 18:59:19.846639098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:59:19.854692   16974 docker.go:295] overlay module found
	I0103 18:59:19.856650   16974 out.go:177] * Using the docker driver based on user configuration
	I0103 18:59:19.857977   16974 start.go:298] selected driver: docker
	I0103 18:59:19.857992   16974 start.go:902] validating driver "docker" against <nil>
	I0103 18:59:19.858002   16974 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 18:59:19.859082   16974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:59:19.909604   16974 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-03 18:59:19.902192175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:59:19.909758   16974 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 18:59:19.909945   16974 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 18:59:19.911813   16974 out.go:177] * Using Docker driver with root privileges
	I0103 18:59:19.913189   16974 cni.go:84] Creating CNI manager for ""
	I0103 18:59:19.913206   16974 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 18:59:19.913216   16974 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 18:59:19.913225   16974 start_flags.go:323] config:
	{Name:addons-173367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-173367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:59:19.914819   16974 out.go:177] * Starting control plane node addons-173367 in cluster addons-173367
	I0103 18:59:19.916123   16974 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 18:59:19.917573   16974 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 18:59:19.918948   16974 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:59:19.918980   16974 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 18:59:19.918975   16974 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 18:59:19.918985   16974 cache.go:56] Caching tarball of preloaded images
	I0103 18:59:19.919091   16974 preload.go:174] Found /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 18:59:19.919103   16974 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 18:59:19.919382   16974 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/config.json ...
	I0103 18:59:19.919405   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/config.json: {Name:mk9a614e805fa30248987953a55588ee7c05e881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:19.933252   16974 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 18:59:19.933343   16974 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 18:59:19.933358   16974 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 18:59:19.933361   16974 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 18:59:19.933369   16974 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 18:59:19.933375   16974 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I0103 18:59:31.334046   16974 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I0103 18:59:31.334081   16974 cache.go:194] Successfully downloaded all kic artifacts
	I0103 18:59:31.334112   16974 start.go:365] acquiring machines lock for addons-173367: {Name:mkca21c60edcba3605766cd86fe848df4c24aff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 18:59:31.334243   16974 start.go:369] acquired machines lock for "addons-173367" in 112.782µs
	I0103 18:59:31.334267   16974 start.go:93] Provisioning new machine with config: &{Name:addons-173367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-173367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 18:59:31.334340   16974 start.go:125] createHost starting for "" (driver="docker")
	I0103 18:59:31.337307   16974 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0103 18:59:31.337514   16974 start.go:159] libmachine.API.Create for "addons-173367" (driver="docker")
	I0103 18:59:31.337544   16974 client.go:168] LocalClient.Create starting
	I0103 18:59:31.337642   16974 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem
	I0103 18:59:31.700332   16974 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem
	I0103 18:59:31.896489   16974 cli_runner.go:164] Run: docker network inspect addons-173367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 18:59:31.911314   16974 cli_runner.go:211] docker network inspect addons-173367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 18:59:31.911384   16974 network_create.go:281] running [docker network inspect addons-173367] to gather additional debugging logs...
	I0103 18:59:31.911408   16974 cli_runner.go:164] Run: docker network inspect addons-173367
	W0103 18:59:31.925317   16974 cli_runner.go:211] docker network inspect addons-173367 returned with exit code 1
	I0103 18:59:31.925342   16974 network_create.go:284] error running [docker network inspect addons-173367]: docker network inspect addons-173367: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-173367 not found
	I0103 18:59:31.925353   16974 network_create.go:286] output of [docker network inspect addons-173367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-173367 not found
	
	** /stderr **
	I0103 18:59:31.925420   16974 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 18:59:31.940368   16974 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002829fb0}
	I0103 18:59:31.940414   16974 network_create.go:124] attempt to create docker network addons-173367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0103 18:59:31.940449   16974 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-173367 addons-173367
	I0103 18:59:31.989991   16974 network_create.go:108] docker network addons-173367 192.168.49.0/24 created
	I0103 18:59:31.990028   16974 kic.go:121] calculated static IP "192.168.49.2" for the "addons-173367" container
	I0103 18:59:31.990075   16974 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 18:59:32.003758   16974 cli_runner.go:164] Run: docker volume create addons-173367 --label name.minikube.sigs.k8s.io=addons-173367 --label created_by.minikube.sigs.k8s.io=true
	I0103 18:59:32.019285   16974 oci.go:103] Successfully created a docker volume addons-173367
	I0103 18:59:32.019354   16974 cli_runner.go:164] Run: docker run --rm --name addons-173367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-173367 --entrypoint /usr/bin/test -v addons-173367:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 18:59:35.738745   16974 cli_runner.go:217] Completed: docker run --rm --name addons-173367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-173367 --entrypoint /usr/bin/test -v addons-173367:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (3.71934043s)
	I0103 18:59:35.738770   16974 oci.go:107] Successfully prepared a docker volume addons-173367
	I0103 18:59:35.738796   16974 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:59:35.738819   16974 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 18:59:35.738890   16974 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-173367:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 18:59:40.881310   16974 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-173367:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.142384044s)
	I0103 18:59:40.881339   16974 kic.go:203] duration metric: took 5.142518 seconds to extract preloaded images to volume
	W0103 18:59:40.881489   16974 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 18:59:40.881576   16974 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 18:59:40.932563   16974 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-173367 --name addons-173367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-173367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-173367 --network addons-173367 --ip 192.168.49.2 --volume addons-173367:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 18:59:41.263529   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Running}}
	I0103 18:59:41.280310   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 18:59:41.297510   16974 cli_runner.go:164] Run: docker exec addons-173367 stat /var/lib/dpkg/alternatives/iptables
	I0103 18:59:41.360186   16974 oci.go:144] the created container "addons-173367" has a running status.
	I0103 18:59:41.360222   16974 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa...
	I0103 18:59:41.475577   16974 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 18:59:41.493361   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 18:59:41.510398   16974 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 18:59:41.510424   16974 kic_runner.go:114] Args: [docker exec --privileged addons-173367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 18:59:41.590148   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 18:59:41.611398   16974 machine.go:88] provisioning docker machine ...
	I0103 18:59:41.611437   16974 ubuntu.go:169] provisioning hostname "addons-173367"
	I0103 18:59:41.611504   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:41.628697   16974 main.go:141] libmachine: Using SSH client type: native
	I0103 18:59:41.629149   16974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0103 18:59:41.629176   16974 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-173367 && echo "addons-173367" | sudo tee /etc/hostname
	I0103 18:59:41.630768   16974 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57970->127.0.0.1:32772: read: connection reset by peer
	I0103 18:59:44.759743   16974 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-173367
	
	I0103 18:59:44.759830   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:44.778200   16974 main.go:141] libmachine: Using SSH client type: native
	I0103 18:59:44.778551   16974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0103 18:59:44.778576   16974 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-173367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-173367/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-173367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 18:59:44.893842   16974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
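	The hosts-file snippet above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is missing. A minimal check of the result (a sketch, run inside the node over SSH):
	# the 127.0.1.1 alias should now carry the machine name
	grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 addons-173367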
	I0103 18:59:44.893868   16974 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-8915/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-8915/.minikube}
	I0103 18:59:44.893887   16974 ubuntu.go:177] setting up certificates
	I0103 18:59:44.893898   16974 provision.go:83] configureAuth start
	I0103 18:59:44.893946   16974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-173367
	I0103 18:59:44.910472   16974 provision.go:138] copyHostCerts
	I0103 18:59:44.910547   16974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem (1078 bytes)
	I0103 18:59:44.910701   16974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem (1123 bytes)
	I0103 18:59:44.910771   16974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem (1679 bytes)
	I0103 18:59:44.910829   16974 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem org=jenkins.addons-173367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-173367]
	I0103 18:59:44.983371   16974 provision.go:172] copyRemoteCerts
	I0103 18:59:44.983417   16974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 18:59:44.983446   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:44.998539   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 18:59:45.085915   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 18:59:45.105794   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0103 18:59:45.125275   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 18:59:45.145658   16974 provision.go:86] duration metric: configureAuth took 251.734463ms
	I0103 18:59:45.145687   16974 ubuntu.go:193] setting minikube options for container-runtime
	I0103 18:59:45.145858   16974 config.go:182] Loaded profile config "addons-173367": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 18:59:45.145968   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:45.162980   16974 main.go:141] libmachine: Using SSH client type: native
	I0103 18:59:45.163296   16974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0103 18:59:45.163311   16974 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 18:59:45.363929   16974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 18:59:45.363959   16974 machine.go:91] provisioned docker machine in 3.752534614s
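	The sysconfig drop-in written above is what makes CRI-O treat the in-cluster service CIDR as an insecure registry. A quick way to confirm it on the node (sketch, assuming SSH access to the container):
	# show the generated drop-in and confirm crio survived the restart
	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio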
	I0103 18:59:45.363971   16974 client.go:171] LocalClient.Create took 14.02641915s
	I0103 18:59:45.363990   16974 start.go:167] duration metric: libmachine.API.Create for "addons-173367" took 14.026476608s
	I0103 18:59:45.363999   16974 start.go:300] post-start starting for "addons-173367" (driver="docker")
	I0103 18:59:45.364012   16974 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 18:59:45.364078   16974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 18:59:45.364134   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:45.380919   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 18:59:45.466385   16974 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 18:59:45.469126   16974 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 18:59:45.469154   16974 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 18:59:45.469165   16974 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 18:59:45.469172   16974 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 18:59:45.469182   16974 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/addons for local assets ...
	I0103 18:59:45.469252   16974 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/files for local assets ...
	I0103 18:59:45.469278   16974 start.go:303] post-start completed in 105.272954ms
	I0103 18:59:45.469625   16974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-173367
	I0103 18:59:45.485021   16974 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/config.json ...
	I0103 18:59:45.485285   16974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 18:59:45.485329   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:45.501040   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 18:59:45.582549   16974 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 18:59:45.586448   16974 start.go:128] duration metric: createHost completed in 14.252093128s
	I0103 18:59:45.586471   16974 start.go:83] releasing machines lock for "addons-173367", held for 14.252215616s
	I0103 18:59:45.586540   16974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-173367
	I0103 18:59:45.602575   16974 ssh_runner.go:195] Run: cat /version.json
	I0103 18:59:45.602616   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:45.602687   16974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 18:59:45.602771   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 18:59:45.619620   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 18:59:45.620535   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 18:59:45.701401   16974 ssh_runner.go:195] Run: systemctl --version
	I0103 18:59:45.789505   16974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 18:59:45.925202   16974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 18:59:45.929160   16974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 18:59:45.945818   16974 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 18:59:45.945905   16974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 18:59:45.971933   16974 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0103 18:59:45.971956   16974 start.go:475] detecting cgroup driver to use...
	I0103 18:59:45.971988   16974 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 18:59:45.972034   16974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 18:59:45.984374   16974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 18:59:45.993350   16974 docker.go:203] disabling cri-docker service (if available) ...
	I0103 18:59:45.993405   16974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 18:59:46.004623   16974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 18:59:46.016107   16974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 18:59:46.087322   16974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 18:59:46.170384   16974 docker.go:219] disabling docker service ...
	I0103 18:59:46.170456   16974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 18:59:46.186502   16974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 18:59:46.196072   16974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 18:59:46.267598   16974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 18:59:46.351310   16974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 18:59:46.361323   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 18:59:46.375016   16974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 18:59:46.375075   16974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:59:46.383241   16974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 18:59:46.383293   16974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:59:46.391745   16974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:59:46.399473   16974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 18:59:46.407570   16974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 18:59:46.415192   16974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 18:59:46.421747   16974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 18:59:46.428398   16974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 18:59:46.503099   16974 ssh_runner.go:195] Run: sudo systemctl restart crio
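	Taken together, the sed edits above leave the CRI-O drop-in pinned to the preloaded pause image, with cgroupfs as the cgroup manager and conmon placed in the pod cgroup. The relevant keys should afterwards read roughly as follows (a sketch of the expected state, not a dump from this run):
	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"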
	I0103 18:59:46.602624   16974 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 18:59:46.602698   16974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 18:59:46.605803   16974 start.go:543] Will wait 60s for crictl version
	I0103 18:59:46.605856   16974 ssh_runner.go:195] Run: which crictl
	I0103 18:59:46.608689   16974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 18:59:46.639413   16974 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 18:59:46.639517   16974 ssh_runner.go:195] Run: crio --version
	I0103 18:59:46.671105   16974 ssh_runner.go:195] Run: crio --version
	I0103 18:59:46.705347   16974 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 18:59:46.706664   16974 cli_runner.go:164] Run: docker network inspect addons-173367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 18:59:46.721929   16974 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0103 18:59:46.725303   16974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 18:59:46.734997   16974 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:59:46.735052   16974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 18:59:46.786325   16974 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 18:59:46.786345   16974 crio.go:415] Images already preloaded, skipping extraction
	I0103 18:59:46.786389   16974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 18:59:46.815734   16974 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 18:59:46.815756   16974 cache_images.go:84] Images are preloaded, skipping loading
	I0103 18:59:46.815826   16974 ssh_runner.go:195] Run: crio config
	I0103 18:59:46.853222   16974 cni.go:84] Creating CNI manager for ""
	I0103 18:59:46.853239   16974 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 18:59:46.853257   16974 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 18:59:46.853275   16974 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-173367 NodeName:addons-173367 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 18:59:46.853416   16974 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-173367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
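	A config rendered like the one above can be sanity-checked offline before init; a minimal sketch (assuming a kubeadm release new enough to ship the config validate subcommand, which v1.28.x does):
	# parse and validate the rendered kubeadm config without touching the node
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml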
	
	I0103 18:59:46.853469   16974 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-173367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-173367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
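	The unit override above replaces kubelet's ExecStart wholesale (the empty ExecStart= clears the stock command before the full one is set). To see exactly what systemd will run once the drop-in is copied over, on the node (sketch):
	# print the kubelet unit together with the 10-kubeadm.conf drop-in written below
	systemctl cat kubelet
	systemctl daemon-reload   # needed whenever a drop-in is edited by hand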
	I0103 18:59:46.853527   16974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 18:59:46.861253   16974 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 18:59:46.861320   16974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 18:59:46.868597   16974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0103 18:59:46.883294   16974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 18:59:46.897879   16974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0103 18:59:46.912495   16974 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0103 18:59:46.915491   16974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 18:59:46.924707   16974 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367 for IP: 192.168.49.2
	I0103 18:59:46.924738   16974 certs.go:190] acquiring lock for shared ca certs: {Name:mk5aa238e4284ee43cf20f760a8d5a161bd1dece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:46.924867   16974 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key
	I0103 18:59:47.298937   16974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt ...
	I0103 18:59:47.298970   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt: {Name:mkfb64827d4f285d1748d24434596a544c9daad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.299138   16974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key ...
	I0103 18:59:47.299148   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key: {Name:mkc809a6ce0fd18cdc77881670fc93dd14365f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.299222   16974 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key
	I0103 18:59:47.357494   16974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt ...
	I0103 18:59:47.357520   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt: {Name:mk38ce455cde7a20e34867254aad7e3cd01d0b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.357664   16974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key ...
	I0103 18:59:47.357673   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key: {Name:mk42c4bf503b461c3a927744ba874b11164fd936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.357762   16974 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.key
	I0103 18:59:47.357776   16974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt with IP's: []
	I0103 18:59:47.660502   16974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt ...
	I0103 18:59:47.660534   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: {Name:mk92a379efcbdf13d9a66c05f7834b1c54cf0e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.660690   16974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.key ...
	I0103 18:59:47.660701   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.key: {Name:mkf06c37eb8abbd027fc0af74812ac586d333df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.660768   16974 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.key.dd3b5fb2
	I0103 18:59:47.660784   16974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 18:59:47.925568   16974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.crt.dd3b5fb2 ...
	I0103 18:59:47.925599   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.crt.dd3b5fb2: {Name:mk98175ca3f36ec33218ffead73dc1f8e13a7623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.925756   16974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.key.dd3b5fb2 ...
	I0103 18:59:47.925768   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.key.dd3b5fb2: {Name:mk39b7664f403a8f52555815e7f870788c26a8fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:47.925834   16974 certs.go:337] copying /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.crt
	I0103 18:59:47.925900   16974 certs.go:341] copying /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.key
	I0103 18:59:47.925947   16974 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.key
	I0103 18:59:47.925961   16974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.crt with IP's: []
	I0103 18:59:48.007768   16974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.crt ...
	I0103 18:59:48.007798   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.crt: {Name:mk655c79bd10ed2118c73b1a64b77f021e599996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:48.007948   16974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.key ...
	I0103 18:59:48.007958   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.key: {Name:mk9ae6f31d46d11da65a7f9e66bb21a7bebf8c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:59:48.008118   16974 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 18:59:48.008154   16974 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem (1078 bytes)
	I0103 18:59:48.008181   16974 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem (1123 bytes)
	I0103 18:59:48.008207   16974 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem (1679 bytes)
	I0103 18:59:48.008734   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 18:59:48.030449   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 18:59:48.051054   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 18:59:48.071055   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 18:59:48.090581   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 18:59:48.110123   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0103 18:59:48.129663   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 18:59:48.149416   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0103 18:59:48.168683   16974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 18:59:48.188382   16974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 18:59:48.202893   16974 ssh_runner.go:195] Run: openssl version
	I0103 18:59:48.207510   16974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 18:59:48.215263   16974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 18:59:48.218058   16974 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0103 18:59:48.218096   16974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 18:59:48.223962   16974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
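	The b5213941.0 link name above follows OpenSSL's subject-hash convention for certificate directories; the hash is computed from the CA cert itself, as the preceding command shows (sketch):
	# the printed hash is what the .0 symlink in /etc/ssl/certs must be named
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941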
	I0103 18:59:48.231784   16974 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 18:59:48.234652   16974 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 18:59:48.234695   16974 kubeadm.go:404] StartCluster: {Name:addons-173367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-173367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:59:48.234772   16974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 18:59:48.234806   16974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 18:59:48.265940   16974 cri.go:89] found id: ""
	I0103 18:59:48.266000   16974 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 18:59:48.273678   16974 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 18:59:48.281243   16974 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 18:59:48.281294   16974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 18:59:48.288741   16974 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 18:59:48.288781   16974 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 18:59:48.363990   16974 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0103 18:59:48.423879   16974 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 18:59:58.079943   16974 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 18:59:58.080039   16974 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 18:59:58.080172   16974 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0103 18:59:58.080263   16974 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0103 18:59:58.080311   16974 kubeadm.go:322] OS: Linux
	I0103 18:59:58.080374   16974 kubeadm.go:322] CGROUPS_CPU: enabled
	I0103 18:59:58.080439   16974 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0103 18:59:58.080520   16974 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0103 18:59:58.080594   16974 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0103 18:59:58.080657   16974 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0103 18:59:58.080721   16974 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0103 18:59:58.080788   16974 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0103 18:59:58.080861   16974 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0103 18:59:58.080929   16974 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0103 18:59:58.081038   16974 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 18:59:58.081176   16974 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 18:59:58.081279   16974 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 18:59:58.081367   16974 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 18:59:58.083061   16974 out.go:204]   - Generating certificates and keys ...
	I0103 18:59:58.083148   16974 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 18:59:58.083246   16974 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 18:59:58.083345   16974 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 18:59:58.083427   16974 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 18:59:58.083489   16974 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 18:59:58.083546   16974 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 18:59:58.083631   16974 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 18:59:58.083798   16974 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-173367 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 18:59:58.083864   16974 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 18:59:58.083993   16974 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-173367 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 18:59:58.084064   16974 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 18:59:58.084138   16974 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 18:59:58.084190   16974 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 18:59:58.084261   16974 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 18:59:58.084338   16974 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 18:59:58.084407   16974 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 18:59:58.084491   16974 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 18:59:58.084572   16974 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 18:59:58.084703   16974 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 18:59:58.084801   16974 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 18:59:58.087243   16974 out.go:204]   - Booting up control plane ...
	I0103 18:59:58.087341   16974 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 18:59:58.087410   16974 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 18:59:58.087480   16974 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 18:59:58.087576   16974 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 18:59:58.087655   16974 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 18:59:58.087704   16974 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 18:59:58.087840   16974 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 18:59:58.087903   16974 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002047 seconds
	I0103 18:59:58.087992   16974 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 18:59:58.088098   16974 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 18:59:58.088153   16974 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 18:59:58.088305   16974 kubeadm.go:322] [mark-control-plane] Marking the node addons-173367 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 18:59:58.088370   16974 kubeadm.go:322] [bootstrap-token] Using token: cb0t7n.rkwzoc23zq72ya17
	I0103 18:59:58.089897   16974 out.go:204]   - Configuring RBAC rules ...
	I0103 18:59:58.089995   16974 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 18:59:58.090077   16974 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 18:59:58.090238   16974 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 18:59:58.090390   16974 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 18:59:58.090558   16974 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 18:59:58.090658   16974 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 18:59:58.090784   16974 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 18:59:58.090840   16974 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 18:59:58.090902   16974 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 18:59:58.090911   16974 kubeadm.go:322] 
	I0103 18:59:58.090972   16974 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 18:59:58.090980   16974 kubeadm.go:322] 
	I0103 18:59:58.091062   16974 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 18:59:58.091073   16974 kubeadm.go:322] 
	I0103 18:59:58.091098   16974 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 18:59:58.091147   16974 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 18:59:58.091193   16974 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 18:59:58.091199   16974 kubeadm.go:322] 
	I0103 18:59:58.091242   16974 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 18:59:58.091248   16974 kubeadm.go:322] 
	I0103 18:59:58.091298   16974 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 18:59:58.091313   16974 kubeadm.go:322] 
	I0103 18:59:58.091385   16974 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 18:59:58.091481   16974 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 18:59:58.091568   16974 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 18:59:58.091576   16974 kubeadm.go:322] 
	I0103 18:59:58.091673   16974 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 18:59:58.091778   16974 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 18:59:58.091787   16974 kubeadm.go:322] 
	I0103 18:59:58.091886   16974 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cb0t7n.rkwzoc23zq72ya17 \
	I0103 18:59:58.092000   16974 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 \
	I0103 18:59:58.092020   16974 kubeadm.go:322] 	--control-plane 
	I0103 18:59:58.092026   16974 kubeadm.go:322] 
	I0103 18:59:58.092099   16974 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 18:59:58.092105   16974 kubeadm.go:322] 
	I0103 18:59:58.092179   16974 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cb0t7n.rkwzoc23zq72ya17 \
	I0103 18:59:58.092278   16974 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 
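	With init complete, the printed join commands are all a second node would need; from the control plane, membership can be confirmed with (sketch, using the admin kubeconfig kubeadm just wrote):
	# list registered nodes; addons-173367 should appear with the control-plane role
	kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes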
	I0103 18:59:58.092288   16974 cni.go:84] Creating CNI manager for ""
	I0103 18:59:58.092294   16974 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 18:59:58.093870   16974 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 18:59:58.095431   16974 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 18:59:58.098932   16974 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 18:59:58.098951   16974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 18:59:58.114424   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 18:59:58.736554   16974 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 18:59:58.736640   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:58.736663   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=addons-173367 minikube.k8s.io/updated_at=2024_01_03T18_59_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:58.743289   16974 ops.go:34] apiserver oom_adj: -16
	I0103 18:59:58.802003   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:59.302735   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 18:59:59.802398   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:00.303059   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:00.802695   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:01.302840   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:01.803040   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:02.302311   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:02.802586   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:03.302883   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:03.802198   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:04.302280   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:04.802540   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:05.302920   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:05.802379   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:06.302258   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:06.802254   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:07.302164   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:07.802338   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:08.302042   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:08.802567   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:09.302101   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:09.802682   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:10.302273   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:10.802172   16974 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:00:10.876761   16974 kubeadm.go:1088] duration metric: took 12.140176618s to wait for elevateKubeSystemPrivileges.
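	The burst of `kubectl get sa default` calls above is a ~500ms readiness poll for the default ServiceAccount (the cadence is visible in the timestamps). The same idea as a shell loop (a sketch of the pattern, not minikube's own code):
	# block until the default ServiceAccount exists in the default namespace
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done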
	I0103 19:00:10.876796   16974 kubeadm.go:406] StartCluster complete in 22.642103678s
	I0103 19:00:10.876816   16974 settings.go:142] acquiring lock: {Name:mk6273be8cd3d06b021992a8bd25ebbd6366b42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:00:10.876946   16974 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:00:10.877344   16974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/kubeconfig: {Name:mke772e93691b15e3e729ce43b6e84f73895395b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:00:10.877527   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:00:10.877592   16974 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0103 19:00:10.877674   16974 addons.go:69] Setting default-storageclass=true in profile "addons-173367"
	I0103 19:00:10.877684   16974 addons.go:69] Setting yakd=true in profile "addons-173367"
	I0103 19:00:10.877699   16974 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-173367"
	I0103 19:00:10.877709   16974 addons.go:237] Setting addon yakd=true in "addons-173367"
	I0103 19:00:10.877704   16974 addons.go:69] Setting cloud-spanner=true in profile "addons-173367"
	I0103 19:00:10.877731   16974 addons.go:237] Setting addon cloud-spanner=true in "addons-173367"
	I0103 19:00:10.877734   16974 addons.go:69] Setting metrics-server=true in profile "addons-173367"
	I0103 19:00:10.877745   16974 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-173367"
	I0103 19:00:10.877769   16974 config.go:182] Loaded profile config "addons-173367": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:00:10.877771   16974 addons.go:237] Setting addon metrics-server=true in "addons-173367"
	I0103 19:00:10.877774   16974 addons.go:69] Setting ingress=true in profile "addons-173367"
	I0103 19:00:10.877798   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.877811   16974 addons.go:69] Setting ingress-dns=true in profile "addons-173367"
	I0103 19:00:10.877812   16974 addons.go:237] Setting addon ingress=true in "addons-173367"
	I0103 19:00:10.877818   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.877822   16974 addons.go:237] Setting addon ingress-dns=true in "addons-173367"
	I0103 19:00:10.877824   16974 addons.go:69] Setting inspektor-gadget=true in profile "addons-173367"
	I0103 19:00:10.877828   16974 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-173367"
	I0103 19:00:10.877839   16974 addons.go:237] Setting addon inspektor-gadget=true in "addons-173367"
	I0103 19:00:10.877868   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.877869   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.877871   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.877882   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.878117   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878316   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878319   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878321   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878331   16974 addons.go:69] Setting storage-provisioner=true in profile "addons-173367"
	I0103 19:00:10.878344   16974 addons.go:237] Setting addon storage-provisioner=true in "addons-173367"
	I0103 19:00:10.878374   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.878410   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878562   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878599   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.878792   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878992   16974 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-173367"
	I0103 19:00:10.879015   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.879076   16974 addons.go:69] Setting helm-tiller=true in profile "addons-173367"
	I0103 19:00:10.879093   16974 addons.go:237] Setting addon helm-tiller=true in "addons-173367"
	I0103 19:00:10.879146   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.879592   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.879016   16974 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-173367"
	I0103 19:00:10.882234   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.878316   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.879042   16974 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-173367"
	I0103 19:00:10.883415   16974 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-173367"
	I0103 19:00:10.883477   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.879030   16974 addons.go:69] Setting volumesnapshots=true in profile "addons-173367"
	I0103 19:00:10.883829   16974 addons.go:237] Setting addon volumesnapshots=true in "addons-173367"
	I0103 19:00:10.883877   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.879064   16974 addons.go:69] Setting registry=true in profile "addons-173367"
	I0103 19:00:10.883929   16974 addons.go:237] Setting addon registry=true in "addons-173367"
	I0103 19:00:10.883985   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.879053   16974 addons.go:69] Setting gcp-auth=true in profile "addons-173367"
	I0103 19:00:10.884622   16974 mustload.go:65] Loading cluster: addons-173367
	I0103 19:00:10.884822   16974 config.go:182] Loaded profile config "addons-173367": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:00:10.885079   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.924493   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.924616   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.925013   16974 addons.go:237] Setting addon default-storageclass=true in "addons-173367"
	I0103 19:00:10.927058   16974 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0103 19:00:10.925512   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.925876   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
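(Editor's note: the `cli_runner.go:164` lines above and below all shell out to `docker container inspect` with Go templates: `--format={{.State.Status}}` reads the container state, and the `NetworkSettings.Ports` index expression seen further down recovers the host port mapped to the node's SSH port 22/tcp. A minimal Go sketch of those two lookups, assuming only a Docker CLI on PATH; this is not minikube's cli_runner, just the same commands wrapped in exec.Command.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors: docker container inspect <name> --format={{.State.Status}}
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// sshHostPort mirrors the lookup run before each SSH dial:
// docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <name>
func sshHostPort(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, _ := containerState("addons-173367")
	port, _ := sshHostPort("addons-173367")
	fmt.Printf("state=%s ssh-port=%s\n", state, port) // e.g. state=running ssh-port=32772
}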
	I0103 19:00:10.928770   16974 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0103 19:00:10.928854   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0103 19:00:10.928916   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.934788   16974 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0103 19:00:10.929386   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:10.928784   16974 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 19:00:10.928788   16974 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0103 19:00:10.928793   16974 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:00:10.929393   16974 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0103 19:00:10.928778   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0103 19:00:10.937305   16974 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0103 19:00:10.939820   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0103 19:00:10.939838   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0103 19:00:10.941815   16974 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 19:00:10.941910   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 19:00:10.941914   16974 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:00:10.941929   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 19:00:10.943333   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0103 19:00:10.941983   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.941983   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.942009   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.942015   16974 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0103 19:00:10.946759   16974 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0103 19:00:10.947077   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0103 19:00:10.949980   16974 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0103 19:00:10.953846   16974 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0103 19:00:10.950415   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0103 19:00:10.950483   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.953865   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0103 19:00:10.958895   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0103 19:00:10.956912   16974 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 19:00:10.957134   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.961191   16974 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0103 19:00:10.961579   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0103 19:00:10.961649   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.963337   16974 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0103 19:00:10.963355   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0103 19:00:10.963406   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.965379   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0103 19:00:10.967490   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0103 19:00:10.967437   16974 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0103 19:00:10.967468   16974 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0103 19:00:10.969079   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0103 19:00:10.970907   16974 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0103 19:00:10.972925   16974 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0103 19:00:10.972943   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0103 19:00:10.974570   16974 out.go:177]   - Using image docker.io/registry:2.8.3
	I0103 19:00:10.974704   16974 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0103 19:00:10.974777   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.976622   16974 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0103 19:00:10.976637   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0103 19:00:10.978202   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:10.980973   16974 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0103 19:00:10.981003   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0103 19:00:10.981047   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.978672   16974 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0103 19:00:10.981218   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0103 19:00:10.981269   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.978727   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:10.986556   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.987757   16974 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-173367"
	I0103 19:00:10.987803   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:10.988312   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:11.004120   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.022713   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.024194   16974 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 19:00:11.024213   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 19:00:11.024265   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:11.027576   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.039987   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.047601   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.050258   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.050258   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.052201   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.053260   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.055234   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.065413   16974 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0103 19:00:11.061258   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.066087   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:11.068758   16974 out.go:177]   - Using image docker.io/busybox:stable
	I0103 19:00:11.070329   16974 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0103 19:00:11.070344   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0103 19:00:11.070383   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
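(Editor's note: each `scp memory --> <path> (<n> bytes)` line streams a manifest that exists only in the minikube process's memory onto the node at 127.0.0.1:32772, authenticating with the id_rsa key the `sshutil.go:53` lines report. A rough sketch of that idea using golang.org/x/crypto/ssh; piping the bytes through `sudo tee` is my assumption, not necessarily how ssh_runner's transfer is actually implemented.)

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyMemory writes in-memory data to a remote path over an SSH session,
// roughly like the "scp memory --> ..." lines above.
func copyMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// sudo tee lets the file land in root-owned paths like /etc/kubernetes/addons.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32772", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := copyMemory(client, []byte("apiVersion: v1\n"), "/etc/kubernetes/addons/example.yaml"); err != nil {
		panic(err)
	}
}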
	W0103 19:00:11.080684   16974 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0103 19:00:11.080720   16974 retry.go:31] will retry after 196.099931ms: ssh: handshake failed: EOF
	I0103 19:00:11.090486   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 19:00:11.109976   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	W0103 19:00:11.175382   16974 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0103 19:00:11.175410   16974 retry.go:31] will retry after 295.273819ms: ssh: handshake failed: EOF
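(Editor's note: the two `sshutil.go:64` warnings show the dialer treating an early `ssh: handshake failed: EOF` — the container's sshd is not yet accepting connections — as retryable, with a randomized delay logged by retry.go. A generic sketch of that pattern; the jitter formula and attempt cap are assumptions, since the log only records the per-attempt delays.)

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs op until it succeeds or attempts are exhausted, sleeping a
// jittered, growing delay between tries, in the spirit of retry.go:31.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retry(5, 200*time.Millisecond, func() error {
		return fmt.Errorf("ssh: handshake failed: EOF") // stand-in for the SSH dial
	})
}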
	I0103 19:00:11.375673   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0103 19:00:11.376029   16974 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0103 19:00:11.376092   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0103 19:00:11.381086   16974 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 19:00:11.381112   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0103 19:00:11.388080   16974 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-173367" context rescaled to 1 replicas
	I0103 19:00:11.388121   16974 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:00:11.391546   16974 out.go:177] * Verifying Kubernetes components...
	I0103 19:00:11.393015   16974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:00:11.485096   16974 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0103 19:00:11.485175   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0103 19:00:11.575314   16974 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0103 19:00:11.575340   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0103 19:00:11.576208   16974 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0103 19:00:11.576228   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0103 19:00:11.578088   16974 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 19:00:11.578109   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 19:00:11.596584   16974 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0103 19:00:11.596655   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0103 19:00:11.597188   16974 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0103 19:00:11.597210   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0103 19:00:11.674693   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0103 19:00:11.674856   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:00:11.676069   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0103 19:00:11.679555   16974 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0103 19:00:11.679607   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0103 19:00:11.682977   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0103 19:00:11.690219   16974 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 19:00:11.690249   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 19:00:11.693044   16974 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0103 19:00:11.693066   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0103 19:00:11.783205   16974 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0103 19:00:11.783285   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0103 19:00:11.784514   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 19:00:11.790347   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0103 19:00:11.798441   16974 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0103 19:00:11.798522   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0103 19:00:11.876579   16974 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0103 19:00:11.876991   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0103 19:00:11.887220   16974 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0103 19:00:11.887244   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0103 19:00:11.993110   16974 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0103 19:00:11.993189   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0103 19:00:12.075997   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 19:00:12.083100   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0103 19:00:12.176148   16974 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0103 19:00:12.176176   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0103 19:00:12.178173   16974 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0103 19:00:12.178196   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0103 19:00:12.275560   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0103 19:00:12.374768   16974 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0103 19:00:12.374861   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0103 19:00:12.380291   16974 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0103 19:00:12.380363   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0103 19:00:12.578165   16974 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0103 19:00:12.578201   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0103 19:00:12.692810   16974 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0103 19:00:12.692839   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0103 19:00:12.790765   16974 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0103 19:00:12.790793   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0103 19:00:12.794102   16974 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0103 19:00:12.794128   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0103 19:00:13.179999   16974 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0103 19:00:13.180027   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0103 19:00:13.186570   16974 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 19:00:13.186597   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0103 19:00:13.277266   16974 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0103 19:00:13.277294   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0103 19:00:13.279730   16974 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0103 19:00:13.279753   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0103 19:00:13.490054   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0103 19:00:13.697677   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 19:00:13.777051   16974 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.686527948s)
	I0103 19:00:13.777156   16974 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
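(Editor's note: the 2.7s pipeline that just completed edits the coredns ConfigMap in place: kubectl dumps it, sed inserts a `hosts` stanza mapping host.minikube.internal to the gateway 192.168.49.1 above the `forward . /etc/resolv.conf` plugin plus a `log` directive above `errors`, and kubectl replaces the result. A string-level Go sketch of the hosts insertion, assuming a standard Corefile layout; minikube does this with sed rather than in Go.)

package main

import (
	"fmt"
	"strings"
)

// insertHostRecord adds the hosts stanza from the log above the
// "forward . /etc/resolv.conf" plugin line of a Corefile, mirroring the
// sed expression in the pipeline (indentation matches CoreDNS defaults).
func insertHostRecord(corefile, gatewayIP string) string {
	stanza := "        hosts {\n" +
		"           " + gatewayIP + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(stanza)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(insertHostRecord(corefile, "192.168.49.1"))
}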
	I0103 19:00:13.779645   16974 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0103 19:00:13.779678   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0103 19:00:13.877244   16974 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0103 19:00:13.877272   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0103 19:00:14.275869   16974 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0103 19:00:14.275899   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0103 19:00:14.488339   16974 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0103 19:00:14.488417   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0103 19:00:14.576842   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0103 19:00:15.080645   16974 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0103 19:00:15.080734   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0103 19:00:15.280281   16974 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.887190252s)
	I0103 19:00:15.281253   16974 node_ready.go:35] waiting up to 6m0s for node "addons-173367" to be "Ready" ...
	I0103 19:00:15.281481   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.905727512s)
	I0103 19:00:15.392679   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0103 19:00:15.687123   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.012330276s)
	I0103 19:00:15.687322   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.011125035s)
	I0103 19:00:16.095340   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.420433129s)
	I0103 19:00:17.286506   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:17.595815   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.912770889s)
	I0103 19:00:17.595850   16974 addons.go:473] Verifying addon ingress=true in "addons-173367"
	I0103 19:00:17.595870   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.805235041s)
	I0103 19:00:17.595824   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.811276485s)
	I0103 19:00:17.598661   16974 out.go:177] * Verifying ingress addon...
	I0103 19:00:17.596033   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.519944448s)
	I0103 19:00:17.596099   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.512903315s)
	I0103 19:00:17.596140   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.320480123s)
	I0103 19:00:17.596182   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.106092738s)
	I0103 19:00:17.596295   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.898584509s)
	I0103 19:00:17.596364   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.019487239s)
	I0103 19:00:17.600317   16974 addons.go:473] Verifying addon metrics-server=true in "addons-173367"
	I0103 19:00:17.602016   16974 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-173367 service yakd-dashboard -n yakd-dashboard
	
	
	W0103 19:00:17.600379   16974 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0103 19:00:17.600402   16974 addons.go:473] Verifying addon registry=true in "addons-173367"
	I0103 19:00:17.601242   16974 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0103 19:00:17.603686   16974 retry.go:31] will retry after 133.681892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
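(Editor's note: the failure above is an ordering race, not a bad manifest: the VolumeSnapshotClass object is applied in the same kubectl batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new kinds, so the REST mapping lookup fails. The harness's answer, visible at 19:00:17.738 below, is to re-apply the batch with `--force`. Another way to close the race is to block on the CRDs' Established condition before applying the class, sketched here via `kubectl wait`; the 60s timeout is an assumption.)

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Block until the snapshot CRDs are Established; after this, applying a
	// VolumeSnapshotClass can no longer hit "no matches for kind".
	cmd := exec.Command("kubectl", "wait",
		"--for=condition=Established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}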
	I0103 19:00:17.606603   16974 out.go:177] * Verifying registry addon...
	I0103 19:00:17.609204   16974 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0103 19:00:17.610868   16974 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0103 19:00:17.610887   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:17.615873   16974 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0103 19:00:17.615886   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
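(Editor's note: the `kapi.go:96` lines that dominate the rest of this log are ticks of a poll loop — list the pods matching a label selector, report the phase, go around again until Running or the timeout. A compact sketch of that loop with client-go, assuming an already-built clientset; the 500ms interval is a guess, not the harness's real tick.)

package kapi

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls until every pod matching selector in ns is Running,
// echoing the "waiting for pod ..., current state: Pending" loop above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, err // no pods yet: keep polling, or surface the API error
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending/ContainerCreating
				}
			}
			return true, nil
		})
}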
	I0103 19:00:17.738012   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 19:00:17.796168   16974 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0103 19:00:17.796234   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:17.819775   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:17.994669   16974 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0103 19:00:18.082875   16974 addons.go:237] Setting addon gcp-auth=true in "addons-173367"
	I0103 19:00:18.082929   16974 host.go:66] Checking if "addons-173367" exists ...
	I0103 19:00:18.083456   16974 cli_runner.go:164] Run: docker container inspect addons-173367 --format={{.State.Status}}
	I0103 19:00:18.102433   16974 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0103 19:00:18.102492   16974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-173367
	I0103 19:00:18.107624   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:18.112750   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:18.121648   16974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/addons-173367/id_rsa Username:docker}
	I0103 19:00:18.584433   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.191670117s)
	I0103 19:00:18.584520   16974 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-173367"
	I0103 19:00:18.586359   16974 out.go:177] * Verifying csi-hostpath-driver addon...
	I0103 19:00:18.588871   16974 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0103 19:00:18.595924   16974 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0103 19:00:18.595943   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:18.607769   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:18.613092   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:18.887409   16974 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.1493488s)
	I0103 19:00:18.890394   16974 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0103 19:00:18.892231   16974 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 19:00:18.893750   16974 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0103 19:00:18.893765   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0103 19:00:18.910063   16974 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0103 19:00:18.910087   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0103 19:00:18.925783   16974 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0103 19:00:18.925802   16974 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0103 19:00:18.940820   16974 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0103 19:00:19.093255   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:19.107567   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:19.113660   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:19.322018   16974 addons.go:473] Verifying addon gcp-auth=true in "addons-173367"
	I0103 19:00:19.323746   16974 out.go:177] * Verifying gcp-auth addon...
	I0103 19:00:19.326542   16974 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0103 19:00:19.328797   16974 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0103 19:00:19.328811   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:19.593875   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:19.607744   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:19.613321   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:19.784416   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:19.829985   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:20.093070   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:20.107042   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:20.112513   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:20.329656   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:20.593366   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:20.607344   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:20.613230   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:20.830377   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:21.093040   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:21.107780   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:21.113648   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:21.330047   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:21.593567   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:21.607889   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:21.613667   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:21.830007   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:22.092693   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:22.109764   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:22.112711   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:22.284674   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:22.330451   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:22.593017   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:22.607732   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:22.613308   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:22.829303   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:23.092916   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:23.107846   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:23.112234   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:23.329163   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:23.592713   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:23.607733   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:23.613574   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:23.829685   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:24.093203   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:24.107104   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:24.112605   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:24.329874   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:24.592527   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:24.607430   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:24.613155   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:24.785241   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:24.829663   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:25.093193   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:25.107061   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:25.112603   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:25.329714   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:25.593333   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:25.607784   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:25.612937   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:25.830412   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:26.093390   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:26.107893   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:26.113511   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:26.330029   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:26.592957   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:26.608358   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:26.613324   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:26.830294   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:27.092767   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:27.107752   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:27.113550   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:27.284857   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:27.329824   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:27.593299   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:27.607861   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:27.614257   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:27.830258   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:28.093747   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:28.108668   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:28.113386   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:28.329710   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:28.593984   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:28.607840   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:28.612385   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:28.830009   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:29.094308   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:29.107258   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:29.112904   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:29.330021   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:29.592588   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:29.607949   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:29.613677   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:29.784471   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:29.830411   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:30.093974   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:30.108354   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:30.113023   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:30.329384   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:30.593400   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:30.607549   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:30.614060   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:30.829463   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:31.093303   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:31.107206   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:31.112912   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:31.330363   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:31.593047   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:31.608216   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:31.612992   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:31.784719   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:31.830448   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:32.093314   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:32.107127   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:32.112653   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:32.329995   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:32.592640   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:32.607909   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:32.612758   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:32.829965   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:33.093626   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:33.107677   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:33.113322   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:33.329786   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:33.593521   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:33.607392   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:33.612882   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:33.784769   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:33.830510   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:34.093466   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:34.107351   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:34.113014   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:34.330397   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:34.592976   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:34.608253   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:34.613485   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:34.830385   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:35.093666   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:35.107522   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:35.113023   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:35.329332   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:35.593467   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:35.607327   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:35.613190   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:35.784936   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:35.829363   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:36.094619   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:36.111621   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:36.114948   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:36.330395   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:36.593838   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:36.607909   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:36.612116   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:36.830401   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:37.093120   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:37.106915   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:37.112369   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:37.329332   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:37.592816   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:37.607667   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:37.613265   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:37.829509   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:38.092952   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:38.107889   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:38.113044   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:38.284640   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:38.330214   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:38.593099   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:38.607843   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:38.612419   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:38.829390   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:39.093146   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:39.107460   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:39.112417   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:39.329464   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:39.592556   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:39.607285   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:39.612794   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:39.829511   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:40.092956   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:40.108021   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:40.112433   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:40.333167   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:40.593486   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:40.607520   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:40.612690   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:40.784332   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:40.829776   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:41.092702   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:41.107650   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:41.113086   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:41.330085   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:41.592646   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:41.607694   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:41.613447   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:41.829480   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:42.093138   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:42.107155   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:42.112375   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:42.329566   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:42.593377   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:42.607132   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:42.612946   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:42.784611   16974 node_ready.go:58] node "addons-173367" has status "Ready":"False"
	I0103 19:00:42.830056   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:43.092741   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:43.107553   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:43.112916   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:43.284286   16974 node_ready.go:49] node "addons-173367" has status "Ready":"True"
	I0103 19:00:43.284309   16974 node_ready.go:38] duration metric: took 28.003024938s waiting for node "addons-173367" to be "Ready" ...
	I0103 19:00:43.284320   16974 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
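[editor's note] The node_ready.go entries above show the wait loop flipping from "Ready":"False" to "Ready":"True" after about 28s. A minimal client-go sketch of that kind of node-readiness poll is below; it is illustrative only, not minikube's actual implementation — the helper name waitNodeReady and the use of the default kubeconfig path are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True,
// mirroring the node_ready.go:58 / node_ready.go:49 log lines above.
// (Illustrative helper; not minikube's actual code.)
func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition reported yet
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(client, "addons-173367", 6*time.Minute); err != nil {
		panic(err)
	}
}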
	I0103 19:00:43.292194   16974 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-66s5s" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:43.329379   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:43.594516   16974 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0103 19:00:43.594536   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:43.608038   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:43.613312   16974 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0103 19:00:43.613333   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:43.830398   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:44.093401   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:44.109096   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:44.115905   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:44.377289   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:44.594716   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:44.608381   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:44.613535   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:44.829998   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:45.093453   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:45.108496   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:45.113551   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:45.296447   16974 pod_ready.go:92] pod "coredns-5dd5756b68-66s5s" in "kube-system" namespace has status "Ready":"True"
	I0103 19:00:45.296469   16974 pod_ready.go:81] duration metric: took 2.004250505s waiting for pod "coredns-5dd5756b68-66s5s" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.296487   16974 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.300129   16974 pod_ready.go:92] pod "etcd-addons-173367" in "kube-system" namespace has status "Ready":"True"
	I0103 19:00:45.300147   16974 pod_ready.go:81] duration metric: took 3.654171ms waiting for pod "etcd-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.300157   16974 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.306510   16974 pod_ready.go:92] pod "kube-apiserver-addons-173367" in "kube-system" namespace has status "Ready":"True"
	I0103 19:00:45.306528   16974 pod_ready.go:81] duration metric: took 6.366126ms waiting for pod "kube-apiserver-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.306536   16974 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.310509   16974 pod_ready.go:92] pod "kube-controller-manager-addons-173367" in "kube-system" namespace has status "Ready":"True"
	I0103 19:00:45.310527   16974 pod_ready.go:81] duration metric: took 3.98478ms waiting for pod "kube-controller-manager-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.310536   16974 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z4qtr" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.313955   16974 pod_ready.go:92] pod "kube-proxy-z4qtr" in "kube-system" namespace has status "Ready":"True"
	I0103 19:00:45.313969   16974 pod_ready.go:81] duration metric: took 3.426969ms waiting for pod "kube-proxy-z4qtr" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.313976   16974 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.328829   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:45.593538   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:45.606929   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:45.612555   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:45.695831   16974 pod_ready.go:92] pod "kube-scheduler-addons-173367" in "kube-system" namespace has status "Ready":"True"
	I0103 19:00:45.695855   16974 pod_ready.go:81] duration metric: took 381.872776ms waiting for pod "kube-scheduler-addons-173367" in "kube-system" namespace to be "Ready" ...
	I0103 19:00:45.695864   16974 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace to be "Ready" ...
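[editor's note] The recurring kapi.go:96 entries are label-selector waits: list the pods matching a selector, report their phase, and repeat until every match has a PodReady condition of True (the kapi.go:86 "Found N Pods" lines mark when matches first appear). A self-contained sketch of that pattern follows; the function name allSelectedPodsReady, the namespace, and the kubeconfig handling are assumptions for illustration, not minikube's actual API.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allSelectedPodsReady lists pods by label selector and reports whether
// every match has PodReady=True, echoing the "waiting for pod <selector>,
// current state: Pending" lines above. (Illustrative only.)
func allSelectedPodsReady(client kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, nil // nothing matching yet: keep waiting
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			fmt.Printf("waiting for pod %q, current state: %s\n", pod.Name, pod.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		return allSelectedPodsReady(client, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
	})
	if err != nil {
		panic(err)
	}
}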
	I0103 19:00:45.829624   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:46.094360   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:46.107347   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:46.112751   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:46.329325   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:46.593984   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:46.607612   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:46.613477   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:46.829464   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:47.093793   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:47.107040   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:47.113002   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:47.329683   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:47.594389   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:47.608226   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:47.612906   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:47.701308   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:00:47.829948   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:48.094127   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:48.107571   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:48.113394   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:48.377772   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:48.594722   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:48.608273   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:48.613682   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:48.830414   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:49.093961   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:49.107275   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:49.112905   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:49.329617   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:49.594103   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:49.607611   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:49.613403   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:49.829569   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:50.094202   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:50.107474   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:50.113102   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:50.201640   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:00:50.330275   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:50.593689   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:50.607215   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:50.613133   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:50.829589   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:51.095506   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:51.108051   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:51.113957   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:51.330504   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:51.595192   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:51.608547   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:51.613798   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:51.830007   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:52.098295   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:52.108566   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:52.114043   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:52.202359   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:00:52.330390   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:52.593594   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:52.607290   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:52.613687   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:52.830364   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:53.094152   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:53.107715   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:53.113864   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:53.330109   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:53.593335   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:53.607702   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:53.613788   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:53.830759   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:54.094496   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:54.108124   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:54.113171   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:54.330046   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:54.594386   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:54.607953   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:54.614899   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:54.701762   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:00:54.829505   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:55.094023   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:55.108338   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:55.113278   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:55.330302   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:55.594670   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:55.608556   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:55.613569   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:55.829390   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:56.093851   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:56.107437   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:56.115424   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:56.329548   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:56.593593   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:56.606970   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:56.612737   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:56.829831   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:57.094260   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:57.107739   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:57.113579   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:57.201161   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:00:57.330208   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:57.593882   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:57.608138   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:57.613653   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:57.877787   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:58.095468   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:58.108533   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:58.178033   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:58.330154   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:58.593754   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:58.608152   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:58.613982   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:58.829728   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:59.094318   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:59.107731   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:59.114223   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:59.202312   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:00:59.329948   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:00:59.594833   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:00:59.606941   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:00:59.612722   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:00:59.830477   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:00.094171   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:00.108215   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:00.113520   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:00.329586   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:00.594332   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:00.607780   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:00.614108   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:00.829925   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:01.095934   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:01.108175   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:01.177643   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:01.330818   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:01.595551   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:01.608303   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:01.613379   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:01.701800   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:01.875802   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:02.095074   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:02.108747   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:02.118695   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:02.331040   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:02.594396   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:02.608710   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:02.613747   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:02.830048   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:03.094886   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:03.108028   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:03.113241   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:03.330254   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:03.594501   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:03.608472   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:03.613851   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:03.702595   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:03.831393   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:04.095516   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:04.108336   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:04.113489   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:04.330357   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:04.594583   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:04.609228   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:04.612657   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:04.830601   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:05.094833   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:05.108431   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:05.113626   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:05.330405   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:05.594627   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:05.608437   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:05.613602   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:05.830580   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:06.094229   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:06.108369   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:06.112930   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:06.202077   16974 pod_ready.go:102] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:06.329707   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:06.594166   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:06.607597   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:06.613376   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:06.832108   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:07.094584   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:07.107767   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:07.113268   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:07.330533   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:07.594265   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:07.608059   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:07.613282   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:07.702110   16974 pod_ready.go:92] pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace has status "Ready":"True"
	I0103 19:01:07.702148   16974 pod_ready.go:81] duration metric: took 22.006263248s waiting for pod "metrics-server-7c66d45ddc-2gr28" in "kube-system" namespace to be "Ready" ...
	I0103 19:01:07.702158   16974 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace to be "Ready" ...
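[editor's note] The "duration metric: took …" entries (22.006263248s for metrics-server just above) are simply elapsed-time measurements wrapped around each wait. A minimal sketch of that pattern, where the Sleep stands in for a real readiness wait and the pod name is copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	// Stand-in for an actual readiness wait, e.g. the selector poll sketched earlier.
	time.Sleep(50 * time.Millisecond)
	fmt.Printf("duration metric: took %v waiting for pod %q in %q namespace to be \"Ready\"\n",
		time.Since(start), "metrics-server-7c66d45ddc-2gr28", "kube-system")
}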
	I0103 19:01:07.830648   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:08.094095   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:08.107452   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:08.113187   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:08.329726   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:08.594582   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:08.607399   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:08.613382   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:08.829965   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:09.095374   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:09.108915   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:09.114234   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:09.330732   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:09.594220   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:09.607990   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:09.614144   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:09.707425   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:09.830883   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:10.095808   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:10.107953   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:10.113813   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:10.330152   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:10.596601   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:10.608019   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:10.613381   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:10.829726   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:11.094588   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:11.109581   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:11.114335   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:11.330712   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:11.594288   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:11.608311   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:11.613039   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:11.709620   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:11.829921   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:12.094307   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:12.107666   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:12.113441   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:12.329922   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:12.594266   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:12.607803   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:12.613492   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:12.829537   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:13.094334   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:13.107891   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:13.113611   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:13.331796   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:13.594662   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:13.607938   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:13.612824   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:13.829507   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:14.094056   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:14.107631   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:14.113578   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:14.207550   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:14.330027   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:14.593872   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:14.607437   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:14.613162   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:14.829938   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:15.095112   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:15.107950   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:15.113547   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:15.377111   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:15.595485   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:15.608559   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:15.614014   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:15.877722   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:16.095334   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:16.108653   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:16.114260   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:16.207833   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:16.330332   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:16.594203   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:16.607606   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:16.613993   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:16.830444   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:17.094341   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:17.108098   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:17.114117   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:17.330000   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:17.594732   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:17.607305   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:17.613735   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:17.830103   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:18.093539   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:18.108304   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:18.113444   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:18.208058   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:18.329962   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:18.593944   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:18.607659   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:18.613819   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:18.830264   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:19.093706   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:19.107312   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:19.113077   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:19.330588   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:19.594721   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:19.607385   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:19.613597   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:19.830224   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:20.094537   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:20.107995   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:20.113673   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:20.330517   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:20.593562   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:20.608062   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:20.612907   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:20.707637   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:20.830492   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:21.094507   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:21.107914   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:21.113752   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:21.378007   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:21.595637   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:21.680620   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:21.681558   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:21.882903   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:22.094467   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:22.177551   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:22.178496   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:22.376211   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:22.609710   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:22.611491   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:22.685408   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:22.708531   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:22.830321   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:23.094447   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:23.108679   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:23.114034   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:23.375957   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:23.594350   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:23.608237   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:23.613265   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:23.829724   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:24.095319   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:24.108035   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:24.114738   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:24.375836   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:24.594500   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:24.607141   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:24.613233   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:24.751215   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:24.830632   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:25.094521   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:25.108269   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:25.113898   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:25.329881   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:25.594774   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:25.607718   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:25.613839   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:25.829852   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:26.094418   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:26.108394   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:26.113798   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:26.330426   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:26.594505   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:26.608229   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:26.613417   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:26.829515   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:27.094519   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:27.107995   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:27.112923   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:27.207812   16974 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"False"
	I0103 19:01:27.329612   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:27.594304   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:27.607902   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:27.613654   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:27.707465   16974 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace has status "Ready":"True"
	I0103 19:01:27.707489   16974 pod_ready.go:81] duration metric: took 20.005325027s waiting for pod "nvidia-device-plugin-daemonset-txfsz" in "kube-system" namespace to be "Ready" ...
	I0103 19:01:27.707507   16974 pod_ready.go:38] duration metric: took 44.42316336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:01:27.707524   16974 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:01:27.707559   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 19:01:27.707613   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 19:01:27.740355   16974 cri.go:89] found id: "c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc"
	I0103 19:01:27.740380   16974 cri.go:89] found id: ""
	I0103 19:01:27.740387   16974 logs.go:284] 1 containers: [c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc]
	I0103 19:01:27.740440   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:27.743743   16974 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 19:01:27.743802   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 19:01:27.775995   16974 cri.go:89] found id: "1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a"
	I0103 19:01:27.776021   16974 cri.go:89] found id: ""
	I0103 19:01:27.776030   16974 logs.go:284] 1 containers: [1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a]
	I0103 19:01:27.776081   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:27.779189   16974 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 19:01:27.779241   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 19:01:27.811870   16974 cri.go:89] found id: "1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f"
	I0103 19:01:27.811894   16974 cri.go:89] found id: ""
	I0103 19:01:27.811901   16974 logs.go:284] 1 containers: [1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f]
	I0103 19:01:27.811939   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:27.815131   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 19:01:27.815189   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 19:01:27.829945   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:27.846814   16974 cri.go:89] found id: "d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5"
	I0103 19:01:27.846834   16974 cri.go:89] found id: ""
	I0103 19:01:27.846841   16974 logs.go:284] 1 containers: [d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5]
	I0103 19:01:27.846898   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:27.850098   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 19:01:27.850166   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 19:01:27.884207   16974 cri.go:89] found id: "196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7"
	I0103 19:01:27.884231   16974 cri.go:89] found id: ""
	I0103 19:01:27.884240   16974 logs.go:284] 1 containers: [196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7]
	I0103 19:01:27.884290   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:27.887623   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 19:01:27.887673   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 19:01:27.918475   16974 cri.go:89] found id: "445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae"
	I0103 19:01:27.918500   16974 cri.go:89] found id: ""
	I0103 19:01:27.918510   16974 logs.go:284] 1 containers: [445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae]
	I0103 19:01:27.918565   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:27.921630   16974 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 19:01:27.921682   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 19:01:27.952012   16974 cri.go:89] found id: "60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d"
	I0103 19:01:27.952037   16974 cri.go:89] found id: ""
	I0103 19:01:27.952048   16974 logs.go:284] 1 containers: [60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d]
	I0103 19:01:27.952100   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:27.955163   16974 logs.go:123] Gathering logs for describe nodes ...
	I0103 19:01:27.955181   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 19:01:28.054223   16974 logs.go:123] Gathering logs for kube-apiserver [c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc] ...
	I0103 19:01:28.054250   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc"
	I0103 19:01:28.093824   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:28.097201   16974 logs.go:123] Gathering logs for coredns [1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f] ...
	I0103 19:01:28.097225   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f"
	I0103 19:01:28.108568   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:28.114425   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:28.135488   16974 logs.go:123] Gathering logs for kube-proxy [196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7] ...
	I0103 19:01:28.135510   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7"
	I0103 19:01:28.179593   16974 logs.go:123] Gathering logs for container status ...
	I0103 19:01:28.179621   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 19:01:28.217877   16974 logs.go:123] Gathering logs for kubelet ...
	I0103 19:01:28.217905   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 19:01:28.293404   16974 logs.go:123] Gathering logs for dmesg ...
	I0103 19:01:28.293436   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 19:01:28.304415   16974 logs.go:123] Gathering logs for etcd [1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a] ...
	I0103 19:01:28.304438   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a"
	I0103 19:01:28.330591   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:28.347214   16974 logs.go:123] Gathering logs for kube-scheduler [d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5] ...
	I0103 19:01:28.347240   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5"
	I0103 19:01:28.386335   16974 logs.go:123] Gathering logs for kube-controller-manager [445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae] ...
	I0103 19:01:28.386362   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae"
	I0103 19:01:28.439196   16974 logs.go:123] Gathering logs for kindnet [60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d] ...
	I0103 19:01:28.439226   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d"
	I0103 19:01:28.471873   16974 logs.go:123] Gathering logs for CRI-O ...
	I0103 19:01:28.471900   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 19:01:28.594701   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:28.607186   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:28.613223   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:28.830585   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:29.095052   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:29.108379   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:29.113920   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:29.330537   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:29.595433   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:29.608243   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:29.614117   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:29.830179   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:30.135835   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:30.135864   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:30.137557   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:30.329935   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:30.595575   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:30.608130   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:30.614391   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:30.829948   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:31.044007   16974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:01:31.056998   16974 api_server.go:72] duration metric: took 1m19.668825336s to wait for apiserver process to appear ...
	I0103 19:01:31.057030   16974 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:01:31.057067   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 19:01:31.057121   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 19:01:31.093365   16974 cri.go:89] found id: "c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc"
	I0103 19:01:31.093391   16974 cri.go:89] found id: ""
	I0103 19:01:31.093400   16974 logs.go:284] 1 containers: [c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc]
	I0103 19:01:31.093450   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:31.096223   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:31.097420   16974 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 19:01:31.097483   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 19:01:31.108093   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:31.113434   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:31.132413   16974 cri.go:89] found id: "1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a"
	I0103 19:01:31.132436   16974 cri.go:89] found id: ""
	I0103 19:01:31.132445   16974 logs.go:284] 1 containers: [1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a]
	I0103 19:01:31.132494   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:31.176262   16974 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 19:01:31.176333   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 19:01:31.213098   16974 cri.go:89] found id: "1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f"
	I0103 19:01:31.213121   16974 cri.go:89] found id: ""
	I0103 19:01:31.213129   16974 logs.go:284] 1 containers: [1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f]
	I0103 19:01:31.213182   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:31.216708   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 19:01:31.216768   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 19:01:31.307513   16974 cri.go:89] found id: "d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5"
	I0103 19:01:31.307538   16974 cri.go:89] found id: ""
	I0103 19:01:31.307545   16974 logs.go:284] 1 containers: [d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5]
	I0103 19:01:31.307600   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:31.310845   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 19:01:31.310929   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 19:01:31.330288   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:31.387534   16974 cri.go:89] found id: "196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7"
	I0103 19:01:31.387560   16974 cri.go:89] found id: ""
	I0103 19:01:31.387569   16974 logs.go:284] 1 containers: [196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7]
	I0103 19:01:31.387620   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:31.391071   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 19:01:31.391139   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 19:01:31.424469   16974 cri.go:89] found id: "445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae"
	I0103 19:01:31.424496   16974 cri.go:89] found id: ""
	I0103 19:01:31.424506   16974 logs.go:284] 1 containers: [445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae]
	I0103 19:01:31.424562   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:31.427734   16974 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 19:01:31.427790   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 19:01:31.489481   16974 cri.go:89] found id: "60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d"
	I0103 19:01:31.489504   16974 cri.go:89] found id: ""
	I0103 19:01:31.489511   16974 logs.go:284] 1 containers: [60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d]
	I0103 19:01:31.489561   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:31.492792   16974 logs.go:123] Gathering logs for kube-proxy [196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7] ...
	I0103 19:01:31.492815   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7"
	I0103 19:01:31.530197   16974 logs.go:123] Gathering logs for CRI-O ...
	I0103 19:01:31.530223   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 19:01:31.595080   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:31.608103   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:31.613590   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:31.643192   16974 logs.go:123] Gathering logs for kube-apiserver [c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc] ...
	I0103 19:01:31.643228   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc"
	I0103 19:01:31.687793   16974 logs.go:123] Gathering logs for etcd [1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a] ...
	I0103 19:01:31.687824   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a"
	I0103 19:01:31.729132   16974 logs.go:123] Gathering logs for coredns [1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f] ...
	I0103 19:01:31.729158   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f"
	I0103 19:01:31.761369   16974 logs.go:123] Gathering logs for kube-scheduler [d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5] ...
	I0103 19:01:31.761394   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5"
	I0103 19:01:31.801390   16974 logs.go:123] Gathering logs for kube-controller-manager [445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae] ...
	I0103 19:01:31.801418   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae"
	I0103 19:01:31.830595   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:31.862241   16974 logs.go:123] Gathering logs for kindnet [60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d] ...
	I0103 19:01:31.862274   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d"
	I0103 19:01:31.908982   16974 logs.go:123] Gathering logs for container status ...
	I0103 19:01:31.909007   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 19:01:31.989249   16974 logs.go:123] Gathering logs for kubelet ...
	I0103 19:01:31.989279   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 19:01:32.065951   16974 logs.go:123] Gathering logs for dmesg ...
	I0103 19:01:32.065987   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 19:01:32.077213   16974 logs.go:123] Gathering logs for describe nodes ...
	I0103 19:01:32.077241   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 19:01:32.093591   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:32.107954   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:32.113606   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:01:32.330017   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:32.594734   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:32.608008   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:32.613034   16974 kapi.go:107] duration metric: took 1m15.003828225s to wait for kubernetes.io/minikube-addons=registry ...
	I0103 19:01:32.829968   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:33.095287   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:33.108400   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:33.329775   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:33.595087   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:33.607307   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:33.881127   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:34.095881   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:34.179279   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:34.377498   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:34.602650   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:34.670965   16974 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0103 19:01:34.678579   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:34.680467   16974 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0103 19:01:34.681824   16974 api_server.go:141] control plane version: v1.28.4
	I0103 19:01:34.681851   16974 api_server.go:131] duration metric: took 3.624813218s to wait for apiserver health ...
	I0103 19:01:34.681862   16974 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:01:34.681886   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 19:01:34.681952   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 19:01:34.807135   16974 cri.go:89] found id: "c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc"
	I0103 19:01:34.807163   16974 cri.go:89] found id: ""
	I0103 19:01:34.807172   16974 logs.go:284] 1 containers: [c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc]
	I0103 19:01:34.807219   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:34.877331   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:34.877960   16974 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 19:01:34.878023   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 19:01:35.095377   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:35.098024   16974 cri.go:89] found id: "1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a"
	I0103 19:01:35.098042   16974 cri.go:89] found id: ""
	I0103 19:01:35.098052   16974 logs.go:284] 1 containers: [1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a]
	I0103 19:01:35.098115   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:35.178631   16974 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 19:01:35.178756   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 19:01:35.182435   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:35.377402   16974 cri.go:89] found id: "1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f"
	I0103 19:01:35.377427   16974 cri.go:89] found id: ""
	I0103 19:01:35.377440   16974 logs.go:284] 1 containers: [1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f]
	I0103 19:01:35.377493   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:35.377715   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:35.381948   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 19:01:35.382012   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 19:01:35.579654   16974 cri.go:89] found id: "d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5"
	I0103 19:01:35.579733   16974 cri.go:89] found id: ""
	I0103 19:01:35.579752   16974 logs.go:284] 1 containers: [d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5]
	I0103 19:01:35.579821   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:35.584674   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 19:01:35.584735   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 19:01:35.596475   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:35.679544   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:35.778713   16974 cri.go:89] found id: "196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7"
	I0103 19:01:35.778736   16974 cri.go:89] found id: ""
	I0103 19:01:35.778746   16974 logs.go:284] 1 containers: [196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7]
	I0103 19:01:35.778796   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:35.782691   16974 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 19:01:35.782750   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 19:01:35.877685   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:35.885885   16974 cri.go:89] found id: "445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae"
	I0103 19:01:35.885909   16974 cri.go:89] found id: ""
	I0103 19:01:35.885919   16974 logs.go:284] 1 containers: [445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae]
	I0103 19:01:35.885970   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:35.889848   16974 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 19:01:35.889909   16974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 19:01:35.984313   16974 cri.go:89] found id: "60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d"
	I0103 19:01:35.984337   16974 cri.go:89] found id: ""
	I0103 19:01:35.984348   16974 logs.go:284] 1 containers: [60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d]
	I0103 19:01:35.984412   16974 ssh_runner.go:195] Run: which crictl
	I0103 19:01:35.988523   16974 logs.go:123] Gathering logs for kindnet [60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d] ...
	I0103 19:01:35.988546   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d"
	I0103 19:01:36.080462   16974 logs.go:123] Gathering logs for CRI-O ...
	I0103 19:01:36.080497   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 19:01:36.094873   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:36.108660   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:36.155976   16974 logs.go:123] Gathering logs for container status ...
	I0103 19:01:36.156008   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 19:01:36.218056   16974 logs.go:123] Gathering logs for kube-apiserver [c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc] ...
	I0103 19:01:36.218093   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc"
	I0103 19:01:36.326546   16974 logs.go:123] Gathering logs for coredns [1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f] ...
	I0103 19:01:36.326577   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f"
	I0103 19:01:36.377564   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:36.418243   16974 logs.go:123] Gathering logs for kube-scheduler [d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5] ...
	I0103 19:01:36.418271   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5"
	I0103 19:01:36.515774   16974 logs.go:123] Gathering logs for kube-proxy [196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7] ...
	I0103 19:01:36.515801   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7"
	I0103 19:01:36.589677   16974 logs.go:123] Gathering logs for kube-controller-manager [445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae] ...
	I0103 19:01:36.589703   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae"
	I0103 19:01:36.594724   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:36.607243   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:36.660139   16974 logs.go:123] Gathering logs for kubelet ...
	I0103 19:01:36.660172   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 19:01:36.754892   16974 logs.go:123] Gathering logs for dmesg ...
	I0103 19:01:36.754932   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 19:01:36.767268   16974 logs.go:123] Gathering logs for describe nodes ...
	I0103 19:01:36.767304   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 19:01:36.875767   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:37.017057   16974 logs.go:123] Gathering logs for etcd [1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a] ...
	I0103 19:01:37.017089   16974 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a"
	I0103 19:01:37.094919   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:37.107457   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:37.329639   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:37.594741   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:37.607124   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:37.830575   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:38.094577   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:38.108900   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:38.330456   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:38.593973   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:38.607655   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:38.830157   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:39.094047   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:39.107488   16974 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:01:39.329941   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:39.595768   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:39.686151   16974 system_pods.go:59] 19 kube-system pods found
	I0103 19:01:39.686240   16974 system_pods.go:61] "coredns-5dd5756b68-66s5s" [a5b46bed-e0b5-47a2-bc92-abd72268a20a] Running
	I0103 19:01:39.686260   16974 system_pods.go:61] "csi-hostpath-attacher-0" [cee530f9-adb3-4025-97ff-2e7ea62aa924] Running
	I0103 19:01:39.686283   16974 system_pods.go:61] "csi-hostpath-resizer-0" [ef13a2c7-dc7d-4ec8-8224-70098a79cd9d] Running
	I0103 19:01:39.686306   16974 system_pods.go:61] "csi-hostpathplugin-xq8dc" [5c268404-7dab-458f-96ce-61b775c33479] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0103 19:01:39.686315   16974 system_pods.go:61] "etcd-addons-173367" [e05be83a-ce76-426d-b347-7518d9f9252d] Running
	I0103 19:01:39.686329   16974 system_pods.go:61] "kindnet-t7hrd" [0c475c86-637a-433c-8577-1ee477329f15] Running
	I0103 19:01:39.686336   16974 system_pods.go:61] "kube-apiserver-addons-173367" [99548334-c565-44b5-95f5-d4efe55787a6] Running
	I0103 19:01:39.686344   16974 system_pods.go:61] "kube-controller-manager-addons-173367" [a04b6d2d-e281-4d21-883a-de8d9150a8b9] Running
	I0103 19:01:39.686351   16974 system_pods.go:61] "kube-ingress-dns-minikube" [1647fa3f-41cc-447b-a2a8-1e3c8adcf618] Running
	I0103 19:01:39.686358   16974 system_pods.go:61] "kube-proxy-z4qtr" [63cdcee8-8f32-4b1b-925f-0c34ba88ec45] Running
	I0103 19:01:39.686365   16974 system_pods.go:61] "kube-scheduler-addons-173367" [8a70048f-42b3-472d-bd12-59f8573ddf58] Running
	I0103 19:01:39.686378   16974 system_pods.go:61] "metrics-server-7c66d45ddc-2gr28" [1bcec89a-19d0-41ff-8305-2b37e1646fad] Running
	I0103 19:01:39.686385   16974 system_pods.go:61] "nvidia-device-plugin-daemonset-txfsz" [3f385b33-2c9b-4e02-af46-d4993e55fec5] Running
	I0103 19:01:39.686391   16974 system_pods.go:61] "registry-5mjnb" [0d2aeb06-5a71-450e-9d65-4d92104b10a9] Running
	I0103 19:01:39.686401   16974 system_pods.go:61] "registry-proxy-xvslp" [86efa070-d422-44ef-85d4-80914a2c61d4] Running
	I0103 19:01:39.686408   16974 system_pods.go:61] "snapshot-controller-58dbcc7b99-nzmsn" [ac0028be-b15e-4fb9-924e-a0b86d117dd9] Running
	I0103 19:01:39.686414   16974 system_pods.go:61] "snapshot-controller-58dbcc7b99-tctj7" [26b5fa92-2d8a-42ed-a397-7fa44e416e90] Running
	I0103 19:01:39.686425   16974 system_pods.go:61] "storage-provisioner" [1e744694-3c4d-4d33-a256-c0ab8425be48] Running
	I0103 19:01:39.686434   16974 system_pods.go:61] "tiller-deploy-7b677967b9-8jgcn" [8e17fd14-1eb4-4234-99cd-b179d7fae114] Running
	I0103 19:01:39.686442   16974 system_pods.go:74] duration metric: took 5.004573597s to wait for pod list to return data ...
	I0103 19:01:39.686454   16974 default_sa.go:34] waiting for default service account to be created ...
	I0103 19:01:39.688217   16974 kapi.go:107] duration metric: took 1m22.086973904s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0103 19:01:39.690060   16974 default_sa.go:45] found service account: "default"
	I0103 19:01:39.690079   16974 default_sa.go:55] duration metric: took 3.615053ms for default service account to be created ...
	I0103 19:01:39.690106   16974 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 19:01:39.700343   16974 system_pods.go:86] 19 kube-system pods found
	I0103 19:01:39.700371   16974 system_pods.go:89] "coredns-5dd5756b68-66s5s" [a5b46bed-e0b5-47a2-bc92-abd72268a20a] Running
	I0103 19:01:39.700379   16974 system_pods.go:89] "csi-hostpath-attacher-0" [cee530f9-adb3-4025-97ff-2e7ea62aa924] Running
	I0103 19:01:39.700386   16974 system_pods.go:89] "csi-hostpath-resizer-0" [ef13a2c7-dc7d-4ec8-8224-70098a79cd9d] Running
	I0103 19:01:39.700396   16974 system_pods.go:89] "csi-hostpathplugin-xq8dc" [5c268404-7dab-458f-96ce-61b775c33479] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0103 19:01:39.700404   16974 system_pods.go:89] "etcd-addons-173367" [e05be83a-ce76-426d-b347-7518d9f9252d] Running
	I0103 19:01:39.700522   16974 system_pods.go:89] "kindnet-t7hrd" [0c475c86-637a-433c-8577-1ee477329f15] Running
	I0103 19:01:39.700539   16974 system_pods.go:89] "kube-apiserver-addons-173367" [99548334-c565-44b5-95f5-d4efe55787a6] Running
	I0103 19:01:39.700547   16974 system_pods.go:89] "kube-controller-manager-addons-173367" [a04b6d2d-e281-4d21-883a-de8d9150a8b9] Running
	I0103 19:01:39.700557   16974 system_pods.go:89] "kube-ingress-dns-minikube" [1647fa3f-41cc-447b-a2a8-1e3c8adcf618] Running
	I0103 19:01:39.700567   16974 system_pods.go:89] "kube-proxy-z4qtr" [63cdcee8-8f32-4b1b-925f-0c34ba88ec45] Running
	I0103 19:01:39.700577   16974 system_pods.go:89] "kube-scheduler-addons-173367" [8a70048f-42b3-472d-bd12-59f8573ddf58] Running
	I0103 19:01:39.700584   16974 system_pods.go:89] "metrics-server-7c66d45ddc-2gr28" [1bcec89a-19d0-41ff-8305-2b37e1646fad] Running
	I0103 19:01:39.700594   16974 system_pods.go:89] "nvidia-device-plugin-daemonset-txfsz" [3f385b33-2c9b-4e02-af46-d4993e55fec5] Running
	I0103 19:01:39.700604   16974 system_pods.go:89] "registry-5mjnb" [0d2aeb06-5a71-450e-9d65-4d92104b10a9] Running
	I0103 19:01:39.700610   16974 system_pods.go:89] "registry-proxy-xvslp" [86efa070-d422-44ef-85d4-80914a2c61d4] Running
	I0103 19:01:39.700620   16974 system_pods.go:89] "snapshot-controller-58dbcc7b99-nzmsn" [ac0028be-b15e-4fb9-924e-a0b86d117dd9] Running
	I0103 19:01:39.700630   16974 system_pods.go:89] "snapshot-controller-58dbcc7b99-tctj7" [26b5fa92-2d8a-42ed-a397-7fa44e416e90] Running
	I0103 19:01:39.700636   16974 system_pods.go:89] "storage-provisioner" [1e744694-3c4d-4d33-a256-c0ab8425be48] Running
	I0103 19:01:39.700646   16974 system_pods.go:89] "tiller-deploy-7b677967b9-8jgcn" [8e17fd14-1eb4-4234-99cd-b179d7fae114] Running
	I0103 19:01:39.700660   16974 system_pods.go:126] duration metric: took 10.542026ms to wait for k8s-apps to be running ...
	I0103 19:01:39.700671   16974 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:01:39.700721   16974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:01:39.780212   16974 system_svc.go:56] duration metric: took 79.532316ms WaitForService to wait for kubelet.
	I0103 19:01:39.780294   16974 kubeadm.go:581] duration metric: took 1m28.392124585s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:01:39.780335   16974 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:01:39.784860   16974 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0103 19:01:39.784890   16974 node_conditions.go:123] node cpu capacity is 8
	I0103 19:01:39.784905   16974 node_conditions.go:105] duration metric: took 4.553351ms to run NodePressure ...
	I0103 19:01:39.784919   16974 start.go:228] waiting for startup goroutines ...
	I0103 19:01:39.879390   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:40.094456   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:40.329739   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:40.594390   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:40.829259   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:41.093857   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:41.376727   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:41.594874   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:41.829963   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:42.094747   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:42.332590   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:42.594245   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:42.829953   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:43.095309   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:43.330281   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:43.594399   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:43.829915   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:44.094301   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:01:44.329389   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:44.593994   16974 kapi.go:107] duration metric: took 1m26.005121449s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0103 19:01:44.829929   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:45.330275   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:45.830128   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:46.330216   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:46.832452   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:47.330110   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:47.830281   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:48.329998   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:48.830103   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:49.330276   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:49.830375   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:50.330303   16974 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:01:50.830176   16974 kapi.go:107] duration metric: took 1m31.50362998s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0103 19:01:50.832259   16974 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-173367 cluster.
	I0103 19:01:50.833804   16974 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0103 19:01:50.835344   16974 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0103 19:01:50.839111   16974 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0103 19:01:50.840578   16974 addons.go:508] enable addons completed in 1m39.962980947s: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner helm-tiller inspektor-gadget metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0103 19:01:50.840627   16974 start.go:233] waiting for cluster config update ...
	I0103 19:01:50.840656   16974 start.go:242] writing updated cluster config ...
	I0103 19:01:50.840976   16974 ssh_runner.go:195] Run: rm -f paused
	I0103 19:01:50.889565   16974 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 19:01:50.891722   16974 out.go:177] * Done! kubectl is now configured to use "addons-173367" cluster and "default" namespace by default
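
The gcp-auth messages above name two knobs: the `gcp-auth-skip-secret` pod label and rerunning addons enable with --refresh. A minimal sketch of both, assuming the same profile name (addons-173367); the pod name "probe" is illustrative only:

    # Opt a single pod out of credential mounting via the label named above:
    kubectl --context addons-173367 run probe --image=busybox \
      --labels=gcp-auth-skip-secret=true -- sleep 3600

    # Re-mount credentials into pods created before the addon finished:
    minikube -p addons-173367 addons enable gcp-auth --refresh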
	
	
	==> CRI-O <==
	Jan 03 19:04:33 addons-173367 crio[951]: time="2024-01-03 19:04:33.105586719Z" level=info msg="Removing container: fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23" id=14c2a530-5696-49f3-8484-5ccfc0534e47 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 03 19:04:33 addons-173367 crio[951]: time="2024-01-03 19:04:33.155816787Z" level=info msg="Removed container fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=14c2a530-5696-49f3-8484-5ccfc0534e47 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.574212953Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=3bf7fcfc-5aee-48cb-972d-891237be3ed6 name=/runtime.v1.ImageService/PullImage
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.575076897Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=7011cd7c-8ef3-479b-b1b7-df676720d246 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.575996748Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7011cd7c-8ef3-479b-b1b7-df676720d246 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.576798013Z" level=info msg="Creating container: default/hello-world-app-5d77478584-m7b9j/hello-world-app" id=9f98c64b-1ecf-4d66-9de6-199d741becb7 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.576888219Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.647219984Z" level=info msg="Created container 51de30407f30429cdee0c742fdb6dbb6fe0f99ca17abbb73dae4c046efb66569: default/hello-world-app-5d77478584-m7b9j/hello-world-app" id=9f98c64b-1ecf-4d66-9de6-199d741becb7 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.647792044Z" level=info msg="Starting container: 51de30407f30429cdee0c742fdb6dbb6fe0f99ca17abbb73dae4c046efb66569" id=2f803860-d7aa-4bc2-b052-d8c3aa3efc64 name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.653417896Z" level=info msg="Started container" PID=10668 containerID=51de30407f30429cdee0c742fdb6dbb6fe0f99ca17abbb73dae4c046efb66569 description=default/hello-world-app-5d77478584-m7b9j/hello-world-app id=2f803860-d7aa-4bc2-b052-d8c3aa3efc64 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c474e3c47860b16d4e3cb49fc284fdcb855e83eb43e53d5b0c33eccadc941897
	Jan 03 19:04:34 addons-173367 crio[951]: time="2024-01-03 19:04:34.666062320Z" level=info msg="Stopping container: 76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c (timeout: 2s)" id=125a91cd-9f7b-4011-afe4-df168c22e183 name=/runtime.v1.RuntimeService/StopContainer
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.672333161Z" level=warning msg="Stopping container 76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=125a91cd-9f7b-4011-afe4-df168c22e183 name=/runtime.v1.RuntimeService/StopContainer
	Jan 03 19:04:36 addons-173367 conmon[6344]: conmon 76c3a11eb9bc0f2576cf <ninfo>: container 6356 exited with status 137
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.802675147Z" level=info msg="Stopped container 76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c: ingress-nginx/ingress-nginx-controller-69cff4fd79-qnq68/controller" id=125a91cd-9f7b-4011-afe4-df168c22e183 name=/runtime.v1.RuntimeService/StopContainer
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.803189961Z" level=info msg="Stopping pod sandbox: 561da978de27cddb70d64c595e28b899ec73530fe5361490c9b3bd824942078a" id=50cdfdac-f8b6-44c3-97ff-85d215b4aeb8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.806013305Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-HNL7FC3K2F5WIKB6 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-22OSAPFHOPYZXLML - [0:0]\n-X KUBE-HP-22OSAPFHOPYZXLML\n-X KUBE-HP-HNL7FC3K2F5WIKB6\nCOMMIT\n"
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.807253793Z" level=info msg="Closing host port tcp:80"
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.807285462Z" level=info msg="Closing host port tcp:443"
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.808553547Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.808568916Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.808689202Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-qnq68 Namespace:ingress-nginx ID:561da978de27cddb70d64c595e28b899ec73530fe5361490c9b3bd824942078a UID:27ed73e2-f29e-49f0-972b-94618022bc73 NetNS:/var/run/netns/b1c022c1-bc1a-4c34-b1a7-35d2ce5d78d4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.808801767Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-qnq68 from CNI network \"kindnet\" (type=ptp)"
	Jan 03 19:04:36 addons-173367 crio[951]: time="2024-01-03 19:04:36.847710465Z" level=info msg="Stopped pod sandbox: 561da978de27cddb70d64c595e28b899ec73530fe5361490c9b3bd824942078a" id=50cdfdac-f8b6-44c3-97ff-85d215b4aeb8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 03 19:04:37 addons-173367 crio[951]: time="2024-01-03 19:04:37.117080841Z" level=info msg="Removing container: 76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c" id=6531853e-f695-4e6f-ae72-2c66b63d2f7d name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 03 19:04:37 addons-173367 crio[951]: time="2024-01-03 19:04:37.130332938Z" level=info msg="Removed container 76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c: ingress-nginx/ingress-nginx-controller-69cff4fd79-qnq68/controller" id=6531853e-f695-4e6f-ae72-2c66b63d2f7d name=/runtime.v1.RuntimeService/RemoveContainer
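
The CRI-O entries above can be cross-checked from inside the node with crictl; a sketch, assuming ssh access to this profile's node and that crictl is on its PATH (both are minikube defaults):

    # List all containers (running and exited) as CRI-O sees them:
    minikube -p addons-173367 ssh -- sudo crictl ps -a

    # Inspect the hello-world-app container created in the log, by ID prefix:
    minikube -p addons-173367 ssh -- sudo crictl inspect 51de30407f304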
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51de30407f304       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   c474e3c47860b       hello-world-app-5d77478584-m7b9j
	792d2cb7c83b7       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   a102ccdeb63c1       headlamp-7ddfbb94ff-mrwcb
	5e31ea3cbc9e8       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   bdf388af5264b       nginx
	1c3b4654e108c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   f9328448c2699       gcp-auth-d4c87556c-2bzzg
	e758971f52e96       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   0c04b8d5417c6       yakd-dashboard-9947fc6bf-4frgl
	aba986934b474       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   96e62ff8689cd       ingress-nginx-admission-patch-ncxzt
	30b84a65985f6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   501bc68ff87ff       ingress-nginx-admission-create-pjpgc
	e5a8a76b79d4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   a914da7dccf11       storage-provisioner
	1b05cc2c991cb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   910a637d4a080       coredns-5dd5756b68-66s5s
	196b7c0ee29a4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   f39f757c168b6       kube-proxy-z4qtr
	60d7d38bdc96a       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   b16ef6334631a       kindnet-t7hrd
	1d944c61f85a1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   39d88fb941e39       etcd-addons-173367
	445163bb913c1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   6d97940e97ab4       kube-controller-manager-addons-173367
	c1ee91e1b445a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   5475e2b17efe9       kube-apiserver-addons-173367
	d115c6946c19f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   9e57002f1df1d       kube-scheduler-addons-173367
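
The IDs in the first column are usable directly against the runtime; for instance, pulling the logs of the 7-second-old hello-world-app container from the table above (a sketch under the same ssh assumptions as the previous snippet):

    # Fetch container logs by ID prefix straight from CRI-O:
    minikube -p addons-173367 ssh -- sudo crictl logs 51de30407f304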
	
	
	==> coredns [1b05cc2c991cb313d0db341bac528cba927885f5305cdc0a92638745792ec34f] <==
	[INFO] 10.244.0.18:37028 - 63500 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067628s
	[INFO] 10.244.0.18:54371 - 59887 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.002914833s
	[INFO] 10.244.0.18:54371 - 59891 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.00379166s
	[INFO] 10.244.0.18:58074 - 52573 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003824863s
	[INFO] 10.244.0.18:58074 - 34911 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005391143s
	[INFO] 10.244.0.18:59705 - 52597 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003992416s
	[INFO] 10.244.0.18:59705 - 58230 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005002476s
	[INFO] 10.244.0.18:41666 - 8731 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074086s
	[INFO] 10.244.0.18:41666 - 11550 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100213s
	[INFO] 10.244.0.21:56483 - 45909 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199664s
	[INFO] 10.244.0.21:37440 - 61643 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000312215s
	[INFO] 10.244.0.21:45017 - 34879 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106297s
	[INFO] 10.244.0.21:45060 - 41261 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012011s
	[INFO] 10.244.0.21:46777 - 2572 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102504s
	[INFO] 10.244.0.21:45811 - 46368 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139156s
	[INFO] 10.244.0.21:56575 - 44053 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.006548668s
	[INFO] 10.244.0.21:50238 - 3225 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.011914167s
	[INFO] 10.244.0.21:35019 - 24904 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005817787s
	[INFO] 10.244.0.21:60969 - 43371 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005918849s
	[INFO] 10.244.0.21:35904 - 30590 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007604641s
	[INFO] 10.244.0.21:34318 - 46373 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007664263s
	[INFO] 10.244.0.21:40035 - 20160 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000604993s
	[INFO] 10.244.0.21:51926 - 56393 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00064915s
	[INFO] 10.244.0.24:35093 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00022738s
	[INFO] 10.244.0.24:34557 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156083s
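
The NXDOMAIN lines above are ordinary ndots search-path expansion: each lookup is retried with the cluster.local, GCE zone, and google.internal suffixes before the bare name resolves. A fully qualified query with a trailing dot skips that walk; a sketch using a throwaway busybox pod:

    # The trailing dot marks the name fully qualified, so no search suffixes are tried:
    kubectl --context addons-173367 run dnsprobe --rm -it --restart=Never \
      --image=busybox -- nslookup registry.kube-system.svc.cluster.local.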
	
	
	==> describe nodes <==
	Name:               addons-173367
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-173367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=addons-173367
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T18_59_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-173367
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 18:59:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-173367
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:04:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:03:02 +0000   Wed, 03 Jan 2024 18:59:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:03:02 +0000   Wed, 03 Jan 2024 18:59:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:03:02 +0000   Wed, 03 Jan 2024 18:59:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:03:02 +0000   Wed, 03 Jan 2024 19:00:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-173367
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 40035ea2a7aa42279590d4ae9dc1fe0b
	  System UUID:                6a05abb3-cfa8-42e0-ad79-5f34ee2a37c3
	  Boot ID:                    b5a86fc9-be37-4e1f-bbe9-b1739322b77c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-m7b9j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-d4c87556c-2bzzg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  headlamp                    headlamp-7ddfbb94ff-mrwcb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-5dd5756b68-66s5s                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m31s
	  kube-system                 etcd-addons-173367                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m43s
	  kube-system                 kindnet-t7hrd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m31s
	  kube-system                 kube-apiserver-addons-173367             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-173367    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-z4qtr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-173367             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-4frgl           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m26s  kube-proxy       
	  Normal  Starting                 4m44s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s  kubelet          Node addons-173367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet          Node addons-173367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet          Node addons-173367 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m32s  node-controller  Node addons-173367 event: Registered Node addons-173367 in Controller
	  Normal  NodeReady                3m58s  kubelet          Node addons-173367 status is now: NodeReady
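
This block is a snapshot of kubectl describe at collection time; to re-check the node after the test window, the same data can be queried directly:

    # Re-query node conditions and recent events for the same node:
    kubectl --context addons-173367 describe node addons-173367

    # Or pull just the Ready condition status:
    kubectl --context addons-173367 get node addons-173367 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'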
	
	
	==> dmesg <==
	[  +0.008474] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003277] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000768] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000731] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000867] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000711] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.021069] kauditd_printk_skb: 36 callbacks suppressed
	[Jan 3 19:02] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 0c 72 52 36 74 1a f1 34 e3 7d 84 08 00
	[  +1.027854] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 32 0c 72 52 36 74 1a f1 34 e3 7d 84 08 00
	[  +2.019864] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 0c 72 52 36 74 1a f1 34 e3 7d 84 08 00
	[  +4.123688] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 0c 72 52 36 74 1a f1 34 e3 7d 84 08 00
	[  +8.191408] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 0c 72 52 36 74 1a f1 34 e3 7d 84 08 00
	[ +16.126732] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 0c 72 52 36 74 1a f1 34 e3 7d 84 08 00
	[Jan 3 19:03] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 0c 72 52 36 74 1a f1 34 e3 7d 84 08 00
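
The repeated "martian source" lines (10.244.0.20 seen from 127.0.0.1 on eth0) are consistent with hairpin traffic during the in-node curl against 127.0.0.1; to pull just those entries with wall-clock timestamps, a sketch:

    # Filter the kernel log for martian-source warnings inside the node:
    minikube -p addons-173367 ssh -- sudo dmesg --ctime | grep -A1 martian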
	
	
	==> etcd [1d944c61f85a15e40e9eb15ae47027fd97f13460becca2d14667b29449c01d6a] <==
	{"level":"info","ts":"2024-01-03T19:00:16.884535Z","caller":"traceutil/trace.go:171","msg":"trace[1961217808] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"189.833232ms","start":"2024-01-03T19:00:16.694671Z","end":"2024-01-03T19:00:16.884504Z","steps":["trace[1961217808] 'process raft request'  (duration: 86.851269ms)","trace[1961217808] 'compare'  (duration: 102.242876ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T19:00:16.884656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.370315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T19:00:16.884694Z","caller":"traceutil/trace.go:171","msg":"trace[1720078167] range","detail":"{range_begin:/registry/deployments/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:541; }","duration":"186.41778ms","start":"2024-01-03T19:00:16.698269Z","end":"2024-01-03T19:00:16.884687Z","steps":["trace[1720078167] 'agreement among raft nodes before linearized reading'  (duration: 186.328248ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:16.884555Z","caller":"traceutil/trace.go:171","msg":"trace[1654752817] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"104.418532ms","start":"2024-01-03T19:00:16.780124Z","end":"2024-01-03T19:00:16.884543Z","steps":["trace[1654752817] 'process raft request'  (duration: 104.063508ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:16.884876Z","caller":"traceutil/trace.go:171","msg":"trace[737658536] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"103.822254ms","start":"2024-01-03T19:00:16.781045Z","end":"2024-01-03T19:00:16.884867Z","steps":["trace[737658536] 'process raft request'  (duration: 103.177994ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:00:16.884584Z","caller":"traceutil/trace.go:171","msg":"trace[1948540413] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:552; }","duration":"186.281774ms","start":"2024-01-03T19:00:16.698295Z","end":"2024-01-03T19:00:16.884577Z","steps":["trace[1948540413] 'read index received'  (duration: 82.119784ms)","trace[1948540413] 'applied index is now lower than readState.Index'  (duration: 104.161182ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T19:00:16.885318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.799972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/snapshot-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T19:00:16.885356Z","caller":"traceutil/trace.go:171","msg":"trace[2002240962] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/snapshot-controller; range_end:; response_count:0; response_revision:541; }","duration":"186.862246ms","start":"2024-01-03T19:00:16.698484Z","end":"2024-01-03T19:00:16.885346Z","steps":["trace[2002240962] 'agreement among raft nodes before linearized reading'  (duration: 186.780238ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:01:41.575649Z","caller":"traceutil/trace.go:171","msg":"trace[1651597094] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"165.385614ms","start":"2024-01-03T19:01:41.410232Z","end":"2024-01-03T19:01:41.575617Z","steps":["trace[1651597094] 'process raft request'  (duration: 139.065444ms)","trace[1651597094] 'compare'  (duration: 25.940199ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:01:41.575737Z","caller":"traceutil/trace.go:171","msg":"trace[930066981] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"163.628616ms","start":"2024-01-03T19:01:41.412091Z","end":"2024-01-03T19:01:41.57572Z","steps":["trace[930066981] 'process raft request'  (duration: 163.448054ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:01:41.576115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.95148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:213 size:168674"}
	{"level":"info","ts":"2024-01-03T19:01:41.576165Z","caller":"traceutil/trace.go:171","msg":"trace[208625316] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:213; response_revision:1169; }","duration":"166.040428ms","start":"2024-01-03T19:01:41.410115Z","end":"2024-01-03T19:01:41.576155Z","steps":["trace[208625316] 'range keys from in-memory index tree'  (duration: 165.155701ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:01:57.522584Z","caller":"traceutil/trace.go:171","msg":"trace[261489046] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"131.205445ms","start":"2024-01-03T19:01:57.391363Z","end":"2024-01-03T19:01:57.522569Z","steps":["trace[261489046] 'process raft request'  (duration: 131.108031ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:01:57.522564Z","caller":"traceutil/trace.go:171","msg":"trace[1318876879] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1248; }","duration":"131.480226ms","start":"2024-01-03T19:01:57.391058Z","end":"2024-01-03T19:01:57.522538Z","steps":["trace[1318876879] 'process raft request'  (duration: 54.138968ms)","trace[1318876879] 'compare'  (duration: 77.179286ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-03T19:01:57.753958Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.403757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T19:01:57.754014Z","caller":"traceutil/trace.go:171","msg":"trace[600963297] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:0; response_revision:1251; }","duration":"119.478482ms","start":"2024-01-03T19:01:57.634526Z","end":"2024-01-03T19:01:57.754005Z","steps":["trace[600963297] 'range keys from in-memory index tree'  (duration: 119.325128ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:01:57.920524Z","caller":"traceutil/trace.go:171","msg":"trace[1383943769] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"110.056864ms","start":"2024-01-03T19:01:57.810444Z","end":"2024-01-03T19:01:57.920501Z","steps":["trace[1383943769] 'process raft request'  (duration: 53.35791ms)","trace[1383943769] 'compare'  (duration: 56.521376ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:02:08.684076Z","caller":"traceutil/trace.go:171","msg":"trace[1589747099] transaction","detail":"{read_only:false; response_revision:1376; number_of_response:1; }","duration":"166.203467ms","start":"2024-01-03T19:02:08.51785Z","end":"2024-01-03T19:02:08.684053Z","steps":["trace[1589747099] 'process raft request'  (duration: 165.559377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:02:08.901143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.223688ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/cloud-spanner-emulator-64c8c85f65-ttqb6\" ","response":"range_response_count:1 size:3512"}
	{"level":"info","ts":"2024-01-03T19:02:08.9013Z","caller":"traceutil/trace.go:171","msg":"trace[1110415194] range","detail":"{range_begin:/registry/pods/default/cloud-spanner-emulator-64c8c85f65-ttqb6; range_end:; response_count:1; response_revision:1377; }","duration":"120.392138ms","start":"2024-01-03T19:02:08.780891Z","end":"2024-01-03T19:02:08.901283Z","steps":["trace[1110415194] 'range keys from in-memory index tree'  (duration: 120.127149ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:02:09.045295Z","caller":"traceutil/trace.go:171","msg":"trace[265158265] linearizableReadLoop","detail":"{readStateIndex:1427; appliedIndex:1426; }","duration":"139.552509ms","start":"2024-01-03T19:02:08.905724Z","end":"2024-01-03T19:02:09.045277Z","steps":["trace[265158265] 'read index received'  (duration: 139.373317ms)","trace[265158265] 'applied index is now lower than readState.Index'  (duration: 178.543µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:02:09.045332Z","caller":"traceutil/trace.go:171","msg":"trace[1581327272] transaction","detail":"{read_only:false; response_revision:1378; number_of_response:1; }","duration":"141.572248ms","start":"2024-01-03T19:02:08.903736Z","end":"2024-01-03T19:02:09.045308Z","steps":["trace[1581327272] 'process raft request'  (duration: 141.401738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:02:09.045429Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.707433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-03T19:02:09.045518Z","caller":"traceutil/trace.go:171","msg":"trace[1329710353] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1378; }","duration":"139.810217ms","start":"2024-01-03T19:02:08.905696Z","end":"2024-01-03T19:02:09.045507Z","steps":["trace[1329710353] 'agreement among raft nodes before linearized reading'  (duration: 139.674166ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:02:50.490389Z","caller":"traceutil/trace.go:171","msg":"trace[663118156] transaction","detail":"{read_only:false; response_revision:1744; number_of_response:1; }","duration":"180.378717ms","start":"2024-01-03T19:02:50.309983Z","end":"2024-01-03T19:02:50.490362Z","steps":["trace[663118156] 'process raft request'  (duration: 118.621517ms)","trace[663118156] 'compare'  (duration: 61.617467ms)"],"step_count":2}
	
	
	==> gcp-auth [1c3b4654e108c8c1b22de31ce71eb41fdb7ebcb8ee449c99f7c4b2141c93b357] <==
	2024/01/03 19:01:50 GCP Auth Webhook started!
	2024/01/03 19:01:55 Ready to marshal response ...
	2024/01/03 19:01:55 Ready to write response ...
	2024/01/03 19:01:57 Ready to marshal response ...
	2024/01/03 19:01:57 Ready to write response ...
	2024/01/03 19:01:57 Ready to marshal response ...
	2024/01/03 19:01:57 Ready to write response ...
	2024/01/03 19:02:01 Ready to marshal response ...
	2024/01/03 19:02:01 Ready to write response ...
	2024/01/03 19:02:08 Ready to marshal response ...
	2024/01/03 19:02:08 Ready to write response ...
	2024/01/03 19:02:09 Ready to marshal response ...
	2024/01/03 19:02:09 Ready to write response ...
	2024/01/03 19:02:09 Ready to marshal response ...
	2024/01/03 19:02:09 Ready to write response ...
	2024/01/03 19:02:09 Ready to marshal response ...
	2024/01/03 19:02:09 Ready to write response ...
	2024/01/03 19:02:12 Ready to marshal response ...
	2024/01/03 19:02:12 Ready to write response ...
	2024/01/03 19:02:23 Ready to marshal response ...
	2024/01/03 19:02:23 Ready to write response ...
	2024/01/03 19:02:40 Ready to marshal response ...
	2024/01/03 19:02:40 Ready to write response ...
	2024/01/03 19:04:31 Ready to marshal response ...
	2024/01/03 19:04:31 Ready to write response ...
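
Each "Ready to marshal/write response" pair above is one admission review. To watch the webhook handle a new pod live, a sketch; the Deployment name "gcp-auth" is inferred from the pod name gcp-auth-d4c87556c-2bzzg, so treat it as an assumption:

    # Tail the webhook while creating a pod in another terminal:
    kubectl --context addons-173367 -n gcp-auth logs deploy/gcp-auth -f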
	
	
	==> kernel <==
	 19:04:41 up 47 min,  0 users,  load average: 0.47, 0.54, 0.29
	Linux addons-173367 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [60d7d38bdc96a69acb171cbb213669b0abdfe53f172781ba46d62d9048ff8f9d] <==
	I0103 19:02:32.837407       1 main.go:227] handling current node
	I0103 19:02:42.841480       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:02:42.841504       1 main.go:227] handling current node
	I0103 19:02:52.853777       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:02:52.853797       1 main.go:227] handling current node
	I0103 19:03:02.858367       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:03:02.858391       1 main.go:227] handling current node
	I0103 19:03:12.861888       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:03:12.861910       1 main.go:227] handling current node
	I0103 19:03:22.869977       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:03:22.870003       1 main.go:227] handling current node
	I0103 19:03:32.881194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:03:32.881215       1 main.go:227] handling current node
	I0103 19:03:42.884310       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:03:42.884333       1 main.go:227] handling current node
	I0103 19:03:52.896370       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:03:52.896393       1 main.go:227] handling current node
	I0103 19:04:02.899828       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:04:02.899849       1 main.go:227] handling current node
	I0103 19:04:12.912103       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:04:12.912123       1 main.go:227] handling current node
	I0103 19:04:22.921845       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:04:22.921870       1 main.go:227] handling current node
	I0103 19:04:32.925035       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:04:32.925055       1 main.go:227] handling current node
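
kindnet re-lists node IPs roughly every ten seconds, as the timestamps show; the same stream comes straight from the pod named in the container status table above:

    # Tail the kindnet CNI logs from the pod listed earlier:
    kubectl --context addons-173367 -n kube-system logs kindnet-t7hrd --tail=20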
	
	
	==> kube-apiserver [c1ee91e1b445a341d54714589225cd1ec7fe4ac052b86558121eb62b231c8edc] <==
	I0103 19:02:35.287588       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0103 19:02:37.231909       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0103 19:02:37.240709       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0103 19:02:38.250783       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0103 19:02:56.196402       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.196453       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:56.202352       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.202418       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:56.208750       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.208803       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:56.209750       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.209786       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:56.219447       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.219585       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:56.223330       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.223432       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:56.231299       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.231413       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:02:56.233720       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:02:56.233804       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0103 19:02:57.210290       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0103 19:02:57.231926       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0103 19:02:57.288625       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0103 19:03:08.339423       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0103 19:04:31.842021       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.120.65"}
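
The final line records the ClusterIP allocation for default/hello-world-app; it should match what the Service object itself reports:

    # Confirm the allocated ClusterIP (10.96.120.65 above) on the Service:
    kubectl --context addons-173367 get svc hello-world-app -o wide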
	
	
	==> kube-controller-manager [445163bb913c1050a62ee894f92fd19a75126c6a709e3c17f8fb23edc65876ae] <==
	E0103 19:03:35.869860       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:03:59.124527       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:03:59.124558       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:04:04.854581       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:04:04.854609       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:04:06.002302       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:04:06.002329       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:04:24.314127       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:04:24.314177       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:04:30.810192       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:04:30.810224       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0103 19:04:31.680545       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0103 19:04:31.689181       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-m7b9j"
	I0103 19:04:31.693686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.370074ms"
	I0103 19:04:31.699065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.254666ms"
	I0103 19:04:31.699139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.512µs"
	I0103 19:04:31.699212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.973µs"
	I0103 19:04:31.705846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.809µs"
	I0103 19:04:33.655112       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0103 19:04:33.655328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.78µs"
	I0103 19:04:33.661181       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0103 19:04:35.126584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.489401ms"
	I0103 19:04:35.126664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.711µs"
	W0103 19:04:37.052071       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:04:37.052099       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
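
The recurring PartialObjectMetadata watch failures line up with the CRDs the apiserver log shows being torn down at 19:02:56-57 (the snapshot.storage.k8s.io and gadget.kinvolk.io groups): the garbage collector keeps retrying informers for resources that no longer exist. A quick check that they are in fact gone:

    # List any leftover snapshot/gadget CRDs (expected: none after addon disable):
    kubectl --context addons-173367 get crd | grep -E 'snapshot|gadget' || echo "none"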
	
	
	==> kube-proxy [196b7c0ee29a475b82ab0ee6219bc80de5f49399692a75e8d8d3a0c5610910c7] <==
	I0103 19:00:13.396148       1 server_others.go:69] "Using iptables proxy"
	I0103 19:00:13.786929       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0103 19:00:15.188091       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0103 19:00:15.285604       1 server_others.go:152] "Using iptables Proxier"
	I0103 19:00:15.285714       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0103 19:00:15.285749       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0103 19:00:15.285805       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 19:00:15.286093       1 server.go:846] "Version info" version="v1.28.4"
	I0103 19:00:15.286370       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:00:15.287273       1 config.go:188] "Starting service config controller"
	I0103 19:00:15.287342       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 19:00:15.287394       1 config.go:97] "Starting endpoint slice config controller"
	I0103 19:00:15.287421       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 19:00:15.288109       1 config.go:315] "Starting node config controller"
	I0103 19:00:15.288176       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 19:00:15.477789       1 shared_informer.go:318] Caches are synced for node config
	I0103 19:00:15.477915       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 19:00:15.477994       1 shared_informer.go:318] Caches are synced for service config
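
kube-proxy is in iptables mode here; the NAT rules it programs (and the KUBE-HOSTPORTS chains CRI-O was cleaning up earlier) are visible from the node. A sketch under the same ssh assumptions as above:

    # Inspect the NAT chains kube-proxy maintains inside the node:
    minikube -p addons-173367 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20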
	
	
	==> kube-scheduler [d115c6946c19fcb2c40b6996249403555c3790ed810704d907bc9ba40a9b7fc5] <==
	W0103 18:59:54.976975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0103 18:59:54.977938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0103 18:59:54.977545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0103 18:59:54.977963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0103 18:59:54.977552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 18:59:54.977979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0103 18:59:54.977612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 18:59:54.977998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0103 18:59:54.977751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 18:59:54.978019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0103 18:59:55.789669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 18:59:55.789695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0103 18:59:55.854249       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 18:59:55.854283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0103 18:59:55.918095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0103 18:59:55.918120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0103 18:59:55.972412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 18:59:55.972445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0103 18:59:56.042807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 18:59:56.042843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0103 18:59:56.054060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0103 18:59:56.054085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0103 18:59:56.060461       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 18:59:56.060500       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0103 18:59:58.699682       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
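The burst of "forbidden" list/watch failures above is the usual control-plane startup race: the scheduler's informers start before the apiserver has finished reconciling the RBAC bindings for system:kube-scheduler, and the errors stop once the "Caches are synced" line appears. A spot-check of those permissions after startup might look like this (illustrative only, not part of the test):

	# Both should print "yes" once RBAC has settled.
	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl auth can-i watch nodes --as=system:kube-scheduler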
	
	
	==> kubelet <==
	Jan 03 19:04:31 addons-173367 kubelet[1554]: I0103 19:04:31.881743    1554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gnd\" (UniqueName: \"kubernetes.io/projected/e7ce0280-b202-45bf-aa03-e2a35a495104-kube-api-access-r6gnd\") pod \"hello-world-app-5d77478584-m7b9j\" (UID: \"e7ce0280-b202-45bf-aa03-e2a35a495104\") " pod="default/hello-world-app-5d77478584-m7b9j"
	Jan 03 19:04:31 addons-173367 kubelet[1554]: I0103 19:04:31.881831    1554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e7ce0280-b202-45bf-aa03-e2a35a495104-gcp-creds\") pod \"hello-world-app-5d77478584-m7b9j\" (UID: \"e7ce0280-b202-45bf-aa03-e2a35a495104\") " pod="default/hello-world-app-5d77478584-m7b9j"
	Jan 03 19:04:32 addons-173367 kubelet[1554]: W0103 19:04:32.102787    1554 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/761357a2d6c3b21548e437efd57d16516164b8e567be2bf4de1f17c07fe8fcc0/crio-c474e3c47860b16d4e3cb49fc284fdcb855e83eb43e53d5b0c33eccadc941897 WatchSource:0}: Error finding container c474e3c47860b16d4e3cb49fc284fdcb855e83eb43e53d5b0c33eccadc941897: Status 404 returned error can't find the container with id c474e3c47860b16d4e3cb49fc284fdcb855e83eb43e53d5b0c33eccadc941897
	Jan 03 19:04:32 addons-173367 kubelet[1554]: I0103 19:04:32.788441    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67g6g\" (UniqueName: \"kubernetes.io/projected/1647fa3f-41cc-447b-a2a8-1e3c8adcf618-kube-api-access-67g6g\") pod \"1647fa3f-41cc-447b-a2a8-1e3c8adcf618\" (UID: \"1647fa3f-41cc-447b-a2a8-1e3c8adcf618\") "
	Jan 03 19:04:32 addons-173367 kubelet[1554]: I0103 19:04:32.790189    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1647fa3f-41cc-447b-a2a8-1e3c8adcf618-kube-api-access-67g6g" (OuterVolumeSpecName: "kube-api-access-67g6g") pod "1647fa3f-41cc-447b-a2a8-1e3c8adcf618" (UID: "1647fa3f-41cc-447b-a2a8-1e3c8adcf618"). InnerVolumeSpecName "kube-api-access-67g6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 03 19:04:32 addons-173367 kubelet[1554]: I0103 19:04:32.889602    1554 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-67g6g\" (UniqueName: \"kubernetes.io/projected/1647fa3f-41cc-447b-a2a8-1e3c8adcf618-kube-api-access-67g6g\") on node \"addons-173367\" DevicePath \"\""
	Jan 03 19:04:33 addons-173367 kubelet[1554]: I0103 19:04:33.104578    1554 scope.go:117] "RemoveContainer" containerID="fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23"
	Jan 03 19:04:33 addons-173367 kubelet[1554]: I0103 19:04:33.156138    1554 scope.go:117] "RemoveContainer" containerID="fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23"
	Jan 03 19:04:33 addons-173367 kubelet[1554]: E0103 19:04:33.156646    1554 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23\": container with ID starting with fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23 not found: ID does not exist" containerID="fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23"
	Jan 03 19:04:33 addons-173367 kubelet[1554]: I0103 19:04:33.156704    1554 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23"} err="failed to get container status \"fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23\": rpc error: code = NotFound desc = could not find container \"fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23\": container with ID starting with fa58842926a3b5ad86df9eb2aaa7250893fdd985ca3decc91da987fde4b0ca23 not found: ID does not exist"
	Jan 03 19:04:33 addons-173367 kubelet[1554]: I0103 19:04:33.978366    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1647fa3f-41cc-447b-a2a8-1e3c8adcf618" path="/var/lib/kubelet/pods/1647fa3f-41cc-447b-a2a8-1e3c8adcf618/volumes"
	Jan 03 19:04:33 addons-173367 kubelet[1554]: I0103 19:04:33.978866    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="aadcf40f-4e1d-47f7-bcbc-96091a91103c" path="/var/lib/kubelet/pods/aadcf40f-4e1d-47f7-bcbc-96091a91103c/volumes"
	Jan 03 19:04:33 addons-173367 kubelet[1554]: I0103 19:04:33.979285    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b67d4f9b-c74c-44d5-b657-971daf74364d" path="/var/lib/kubelet/pods/b67d4f9b-c74c-44d5-b657-971daf74364d/volumes"
	Jan 03 19:04:35 addons-173367 kubelet[1554]: I0103 19:04:35.120354    1554 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-m7b9j" podStartSLOduration=1.652206302 podCreationTimestamp="2024-01-03 19:04:31 +0000 UTC" firstStartedPulling="2024-01-03 19:04:32.106419013 +0000 UTC m=+274.264811432" lastFinishedPulling="2024-01-03 19:04:34.574512807 +0000 UTC m=+276.732905239" observedRunningTime="2024-01-03 19:04:35.119940519 +0000 UTC m=+277.278332952" watchObservedRunningTime="2024-01-03 19:04:35.120300109 +0000 UTC m=+277.278692542"
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.018175    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzhf9\" (UniqueName: \"kubernetes.io/projected/27ed73e2-f29e-49f0-972b-94618022bc73-kube-api-access-dzhf9\") pod \"27ed73e2-f29e-49f0-972b-94618022bc73\" (UID: \"27ed73e2-f29e-49f0-972b-94618022bc73\") "
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.018238    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27ed73e2-f29e-49f0-972b-94618022bc73-webhook-cert\") pod \"27ed73e2-f29e-49f0-972b-94618022bc73\" (UID: \"27ed73e2-f29e-49f0-972b-94618022bc73\") "
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.020052    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ed73e2-f29e-49f0-972b-94618022bc73-kube-api-access-dzhf9" (OuterVolumeSpecName: "kube-api-access-dzhf9") pod "27ed73e2-f29e-49f0-972b-94618022bc73" (UID: "27ed73e2-f29e-49f0-972b-94618022bc73"). InnerVolumeSpecName "kube-api-access-dzhf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.020207    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ed73e2-f29e-49f0-972b-94618022bc73-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "27ed73e2-f29e-49f0-972b-94618022bc73" (UID: "27ed73e2-f29e-49f0-972b-94618022bc73"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.116107    1554 scope.go:117] "RemoveContainer" containerID="76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c"
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.119445    1554 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27ed73e2-f29e-49f0-972b-94618022bc73-webhook-cert\") on node \"addons-173367\" DevicePath \"\""
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.119479    1554 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dzhf9\" (UniqueName: \"kubernetes.io/projected/27ed73e2-f29e-49f0-972b-94618022bc73-kube-api-access-dzhf9\") on node \"addons-173367\" DevicePath \"\""
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.130578    1554 scope.go:117] "RemoveContainer" containerID="76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c"
	Jan 03 19:04:37 addons-173367 kubelet[1554]: E0103 19:04:37.130931    1554 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c\": container with ID starting with 76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c not found: ID does not exist" containerID="76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c"
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.131002    1554 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c"} err="failed to get container status \"76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c\": rpc error: code = NotFound desc = could not find container \"76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c\": container with ID starting with 76c3a11eb9bc0f2576cf18240589875f90c192bc62a90ea2e69477c839c1554c not found: ID does not exist"
	Jan 03 19:04:37 addons-173367 kubelet[1554]: I0103 19:04:37.977463    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="27ed73e2-f29e-49f0-972b-94618022bc73" path="/var/lib/kubelet/pods/27ed73e2-f29e-49f0-972b-94618022bc73/volumes"
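The paired "RemoveContainer" / NotFound errors above are a benign double delete: the kubelet asks CRI-O for the status of a container it has just removed, and the runtime correctly reports that the ID no longer exists. If that needed confirming by hand, a sketch (run inside the node, e.g. via minikube ssh) could be:

	# Illustrative check: the ID from the log should be absent from the runtime.
	sudo crictl ps -a | grep fa58842926a3 || echo "container already gone"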
	
	
	==> storage-provisioner [e5a8a76b79d4ac85298fe6efaf760aae151001bbfdcd908cbb439c4168ea1f30] <==
	I0103 19:00:44.206183       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 19:00:44.284055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 19:00:44.284103       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 19:00:44.294024       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 19:00:44.294256       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-173367_486084d5-c2f5-471e-baf8-223e341adab3!
	I0103 19:00:44.294556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e9faa242-2536-46b9-b49f-025f3b8c6e49", APIVersion:"v1", ResourceVersion:"892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-173367_486084d5-c2f5-471e-baf8-223e341adab3 became leader
	I0103 19:00:44.395000       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-173367_486084d5-c2f5-471e-baf8-223e341adab3!
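The provisioner's leader election here uses an Endpoints lock (kube-system/k8s.io-minikube-hostpath), as the Event above records. Assuming kubectl access and the classic Endpoints-based lock annotation, the current holder could be read with:

	# Sketch: print the leader-election record stored on the Endpoints lock object.
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'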
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-173367 -n addons-173367
helpers_test.go:261: (dbg) Run:  kubectl --context addons-173367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.92s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (187.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-547465 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-547465 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.344979146s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-547465 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-547465 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4d283201-3aee-4ba6-af45-a3318a7f4bb1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4d283201-3aee-4ba6-af45-a3318a7f4bb1] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.003237856s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-547465 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0103 19:11:50.907980   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:12:18.593698   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:13:12.544630   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:12.549996   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:12.560494   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:12.580775   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:12.621064   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:12.701414   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:12.861805   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:13.182386   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:13.823305   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:15.103761   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:17.665639   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-547465 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.525512044s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
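"Process exited with status 28" in the stderr block above is curl's exit code 28 (operation timed out), relayed through minikube ssh: the ingress controller never answered on 127.0.0.1:80 inside the node. A manual re-run of the same probe with an explicit timeout might look like this (assuming the profile still exists):

	# Sketch: reproduce the failing probe by hand; curl exits 28 on timeout.
	out/minikube-linux-amd64 -p ingress-addon-legacy-547465 ssh \
	  "curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"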
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-547465 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-547465 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0103 19:13:22.786786   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:13:33.027063   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.013687267s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
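The nslookup step exercises the ingress-dns addon, which answers DNS queries on the node IP itself; "no servers could be reached" means nothing responded on 192.168.49.2:53 within the resolver's timeout. A more direct probe of the addon (dig assumed available on the host) could be:

	# Illustrative DNS probe with a short, explicit timeout.
	dig +time=5 +tries=1 @192.168.49.2 hello-john.test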
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-547465 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-547465 addons disable ingress-dns --alsologtostderr -v=1: (2.625589678s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-547465 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-547465 addons disable ingress --alsologtostderr -v=1: (7.394879127s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-547465
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-547465:

-- stdout --
	[
	    {
	        "Id": "b8ff93692e8816d442112391a021bdcb874187fdac7a10e6facf54db1f78bb35",
	        "Created": "2024-01-03T19:09:34.622167552Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T19:09:34.905294628Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/b8ff93692e8816d442112391a021bdcb874187fdac7a10e6facf54db1f78bb35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8ff93692e8816d442112391a021bdcb874187fdac7a10e6facf54db1f78bb35/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8ff93692e8816d442112391a021bdcb874187fdac7a10e6facf54db1f78bb35/hosts",
	        "LogPath": "/var/lib/docker/containers/b8ff93692e8816d442112391a021bdcb874187fdac7a10e6facf54db1f78bb35/b8ff93692e8816d442112391a021bdcb874187fdac7a10e6facf54db1f78bb35-json.log",
	        "Name": "/ingress-addon-legacy-547465",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-547465:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-547465",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/78277314fe9d2e02ea277dbc44ef771973810e40fcae79bd783320d8be0d388a-init/diff:/var/lib/docker/overlay2/a5364ccac14714ee0f769c339926d51ad0bbde3642ccbcf0e3661d2982bd002b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78277314fe9d2e02ea277dbc44ef771973810e40fcae79bd783320d8be0d388a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78277314fe9d2e02ea277dbc44ef771973810e40fcae79bd783320d8be0d388a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78277314fe9d2e02ea277dbc44ef771973810e40fcae79bd783320d8be0d388a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-547465",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-547465/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-547465",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-547465",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-547465",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ca26aa306b5b8c4dd21e9b80e074aa4e919def6b271d517874e5a631ee2c1fd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0ca26aa306b5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-547465": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b8ff93692e88",
	                        "ingress-addon-legacy-547465"
	                    ],
	                    "NetworkID": "88e4ac74332f96fc68cef86b26c897961c7cb0236c9b94745b7fbdd325b19c9b",
	                    "EndpointID": "6b188eda8bfbfea7b23caa269179992bef0b12046e92a6ab4809bd607ec50d5d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
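One detail worth noting in the inspect output: every PortBindings entry requests HostIp 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports, and the actual assignments (32783-32787 here) appear under NetworkSettings.Ports. Assuming jq is installed, the live mapping can be pulled with:

	# Sketch: list the ephemeral host ports Docker picked for this container.
	docker inspect ingress-addon-legacy-547465 | jq '.[0].NetworkSettings.Ports'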
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-547465 -n ingress-addon-legacy-547465
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-547465 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-547465 logs -n 25: (1.032291605s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-436252                 | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| update-context | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC | 03 Jan 24 19:08 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| service        | functional-436252 service            | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC | 03 Jan 24 19:08 UTC |
	|                | hello-node-connect --url             |                             |         |         |                     |                     |
	| update-context | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC | 03 Jan 24 19:08 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC | 03 Jan 24 19:08 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC | 03 Jan 24 19:08 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC | 03 Jan 24 19:08 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-436252 ssh pgrep          | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:08 UTC | 03 Jan 24 19:09 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-436252 image build -t     | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	|                | localhost/my-image:functional-436252 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-436252 service list       | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	| image          | functional-436252 image ls           | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	| service        | functional-436252 service list       | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	|                | -o json                              |                             |         |         |                     |                     |
	| service        | functional-436252 service            | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| service        | functional-436252                    | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| service        | functional-436252 service            | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| delete         | -p functional-436252                 | functional-436252           | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:09 UTC |
	| start          | -p ingress-addon-legacy-547465       | ingress-addon-legacy-547465 | jenkins | v1.32.0 | 03 Jan 24 19:09 UTC | 03 Jan 24 19:10 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-547465          | ingress-addon-legacy-547465 | jenkins | v1.32.0 | 03 Jan 24 19:10 UTC | 03 Jan 24 19:10 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-547465          | ingress-addon-legacy-547465 | jenkins | v1.32.0 | 03 Jan 24 19:10 UTC | 03 Jan 24 19:10 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-547465          | ingress-addon-legacy-547465 | jenkins | v1.32.0 | 03 Jan 24 19:11 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-547465 ip       | ingress-addon-legacy-547465 | jenkins | v1.32.0 | 03 Jan 24 19:13 UTC | 03 Jan 24 19:13 UTC |
	| addons         | ingress-addon-legacy-547465          | ingress-addon-legacy-547465 | jenkins | v1.32.0 | 03 Jan 24 19:13 UTC | 03 Jan 24 19:13 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-547465          | ingress-addon-legacy-547465 | jenkins | v1.32.0 | 03 Jan 24 19:13 UTC | 03 Jan 24 19:13 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:09:10
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:09:10.165495   56334 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:09:10.165760   56334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:09:10.165770   56334 out.go:309] Setting ErrFile to fd 2...
	I0103 19:09:10.165774   56334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:09:10.165967   56334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:09:10.166554   56334 out.go:303] Setting JSON to false
	I0103 19:09:10.168034   56334 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3096,"bootTime":1704305854,"procs":679,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:09:10.168104   56334 start.go:138] virtualization: kvm guest
	I0103 19:09:10.170977   56334 out.go:177] * [ingress-addon-legacy-547465] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:09:10.172727   56334 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:09:10.172726   56334 notify.go:220] Checking for updates...
	I0103 19:09:10.174400   56334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:09:10.176148   56334 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:09:10.177800   56334 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 19:09:10.179249   56334 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:09:10.180677   56334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:09:10.182412   56334 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:09:10.204194   56334 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:09:10.204340   56334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:09:10.254998   56334 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2024-01-03 19:09:10.246417605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:09:10.255092   56334 docker.go:295] overlay module found
	I0103 19:09:10.257390   56334 out.go:177] * Using the docker driver based on user configuration
	I0103 19:09:10.259092   56334 start.go:298] selected driver: docker
	I0103 19:09:10.259110   56334 start.go:902] validating driver "docker" against <nil>
	I0103 19:09:10.259125   56334 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:09:10.259861   56334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:09:10.310084   56334 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2024-01-03 19:09:10.301836126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:09:10.310306   56334 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:09:10.310535   56334 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:09:10.312960   56334 out.go:177] * Using Docker driver with root privileges
	I0103 19:09:10.314635   56334 cni.go:84] Creating CNI manager for ""
	I0103 19:09:10.314660   56334 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:09:10.314672   56334 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 19:09:10.314684   56334 start_flags.go:323] config:
	{Name:ingress-addon-legacy-547465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-547465 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:09:10.316556   56334 out.go:177] * Starting control plane node ingress-addon-legacy-547465 in cluster ingress-addon-legacy-547465
	I0103 19:09:10.318184   56334 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:09:10.319597   56334 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:09:10.321013   56334 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 19:09:10.321037   56334 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:09:10.336560   56334 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 19:09:10.336587   56334 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 19:09:10.435151   56334 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0103 19:09:10.435199   56334 cache.go:56] Caching tarball of preloaded images
	I0103 19:09:10.435384   56334 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 19:09:10.437611   56334 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0103 19:09:10.439182   56334 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0103 19:09:10.550389   56334 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0103 19:09:26.411351   56334 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0103 19:09:26.411452   56334 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0103 19:09:27.417056   56334 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
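The preload download above pins an md5 checksum in the URL query string and verifies the tarball before trusting it. The same check by hand, using the URL and checksum shown in the log, would be roughly:

	# Sketch: fetch the preload tarball and verify it against the pinned md5.
	curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
	echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -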
	I0103 19:09:27.417398   56334 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/config.json ...
	I0103 19:09:27.417428   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/config.json: {Name:mkfce00f27be72f6b926b5b7263cbbf73425fc41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:27.417605   56334 cache.go:194] Successfully downloaded all kic artifacts
	I0103 19:09:27.417636   56334 start.go:365] acquiring machines lock for ingress-addon-legacy-547465: {Name:mkbf954aa8af2043fb310e4c1a8f3ecb4a971229 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:09:27.417677   56334 start.go:369] acquired machines lock for "ingress-addon-legacy-547465" in 29.514µs
	I0103 19:09:27.417695   56334 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-547465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-547465 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:09:27.417764   56334 start.go:125] createHost starting for "" (driver="docker")
	I0103 19:09:27.420528   56334 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0103 19:09:27.420742   56334 start.go:159] libmachine.API.Create for "ingress-addon-legacy-547465" (driver="docker")
	I0103 19:09:27.420779   56334 client.go:168] LocalClient.Create starting
	I0103 19:09:27.420853   56334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem
	I0103 19:09:27.420889   56334 main.go:141] libmachine: Decoding PEM data...
	I0103 19:09:27.420918   56334 main.go:141] libmachine: Parsing certificate...
	I0103 19:09:27.420968   56334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem
	I0103 19:09:27.421001   56334 main.go:141] libmachine: Decoding PEM data...
	I0103 19:09:27.421012   56334 main.go:141] libmachine: Parsing certificate...
	I0103 19:09:27.421301   56334 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-547465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 19:09:27.436841   56334 cli_runner.go:211] docker network inspect ingress-addon-legacy-547465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 19:09:27.436907   56334 network_create.go:281] running [docker network inspect ingress-addon-legacy-547465] to gather additional debugging logs...
	I0103 19:09:27.436930   56334 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-547465
	W0103 19:09:27.451086   56334 cli_runner.go:211] docker network inspect ingress-addon-legacy-547465 returned with exit code 1
	I0103 19:09:27.451124   56334 network_create.go:284] error running [docker network inspect ingress-addon-legacy-547465]: docker network inspect ingress-addon-legacy-547465: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-547465 not found
	I0103 19:09:27.451139   56334 network_create.go:286] output of [docker network inspect ingress-addon-legacy-547465]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-547465 not found
	
	** /stderr **
	I0103 19:09:27.451239   56334 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:09:27.466532   56334 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020a9260}
	I0103 19:09:27.466563   56334 network_create.go:124] attempt to create docker network ingress-addon-legacy-547465 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0103 19:09:27.466603   56334 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-547465 ingress-addon-legacy-547465
	I0103 19:09:27.518084   56334 network_create.go:108] docker network ingress-addon-legacy-547465 192.168.49.0/24 created
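The network bring-up just logged can be reproduced by hand. A minimal sketch using the same driver options, subnet, gateway, and labels as the docker network create invocation above:

	# Create the per-profile bridge network (options copied from the log).
	docker network create --driver=bridge \
	  --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-547465 \
	  ingress-addon-legacy-547465
	# Confirm the subnet that was actually assigned.
	docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' ingress-addon-legacy-547465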
	I0103 19:09:27.518115   56334 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-547465" container
	I0103 19:09:27.518217   56334 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 19:09:27.532813   56334 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-547465 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-547465 --label created_by.minikube.sigs.k8s.io=true
	I0103 19:09:27.549416   56334 oci.go:103] Successfully created a docker volume ingress-addon-legacy-547465
	I0103 19:09:27.549489   56334 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-547465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-547465 --entrypoint /usr/bin/test -v ingress-addon-legacy-547465:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 19:09:29.282752   56334 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-547465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-547465 --entrypoint /usr/bin/test -v ingress-addon-legacy-547465:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.733191173s)
	I0103 19:09:29.282781   56334 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-547465
	I0103 19:09:29.282803   56334 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 19:09:29.282833   56334 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 19:09:29.282896   56334 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-547465:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 19:09:34.555645   56334 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-547465:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.272701808s)
	I0103 19:09:34.555677   56334 kic.go:203] duration metric: took 5.272842 seconds to extract preloaded images to volume
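The extraction step above is a one-shot container whose only job is to untar the preload into the named volume. A sketch of the same docker run, with the kicbase digest pin omitted for brevity:

	KIC_IMAGE="gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857"
	# Mount the tarball read-only and the target volume, then extract with lz4.
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	  -v ingress-addon-legacy-547465:/extractDir \
	  "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir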
	W0103 19:09:34.555825   56334 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 19:09:34.555943   56334 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 19:09:34.608126   56334 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-547465 --name ingress-addon-legacy-547465 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-547465 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-547465 --network ingress-addon-legacy-547465 --ip 192.168.49.2 --volume ingress-addon-legacy-547465:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 19:09:34.913459   56334 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-547465 --format={{.State.Running}}
	I0103 19:09:34.930527   56334 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-547465 --format={{.State.Status}}
	I0103 19:09:34.949325   56334 cli_runner.go:164] Run: docker exec ingress-addon-legacy-547465 stat /var/lib/dpkg/alternatives/iptables
	I0103 19:09:35.007655   56334 oci.go:144] the created container "ingress-addon-legacy-547465" has a running status.
	I0103 19:09:35.007686   56334 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa...
	I0103 19:09:35.178462   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 19:09:35.178519   56334 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 19:09:35.197366   56334 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-547465 --format={{.State.Status}}
	I0103 19:09:35.213482   56334 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 19:09:35.213528   56334 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-547465 chown docker:docker /home/docker/.ssh/authorized_keys]
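Key provisioning as logged above amounts to generating a passphrase-less key pair and installing the public half inside the container with the right ownership. A sketch; it assumes /home/docker/.ssh already exists in the kicbase image:

	# Generate the machine key pair, as kic does for each profile.
	ssh-keygen -t rsa -N '' -f ./id_rsa
	# Copy the public key in and hand it to the 'docker' login user.
	docker cp ./id_rsa.pub ingress-addon-legacy-547465:/home/docker/.ssh/authorized_keys
	docker exec --privileged ingress-addon-legacy-547465 \
	  chown docker:docker /home/docker/.ssh/authorized_keys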
	I0103 19:09:35.278679   56334 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-547465 --format={{.State.Status}}
	I0103 19:09:35.296647   56334 machine.go:88] provisioning docker machine ...
	I0103 19:09:35.296693   56334 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-547465"
	I0103 19:09:35.296763   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:35.318172   56334 main.go:141] libmachine: Using SSH client type: native
	I0103 19:09:35.318533   56334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0103 19:09:35.318553   56334 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-547465 && echo "ingress-addon-legacy-547465" | sudo tee /etc/hostname
	I0103 19:09:35.586118   56334 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-547465
	
	I0103 19:09:35.586213   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:35.602766   56334 main.go:141] libmachine: Using SSH client type: native
	I0103 19:09:35.603126   56334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0103 19:09:35.603148   56334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-547465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-547465/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-547465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:09:35.721909   56334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
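An annotated copy of the /etc/hosts script that just ran over SSH (logic unchanged, comments added):

	# Only act if /etc/hosts does not already mention the new hostname.
	if ! grep -xq '.*\singress-addon-legacy-547465' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    # An existing 127.0.1.1 line: rewrite it in place.
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-547465/g' /etc/hosts
	  else
	    # No such line yet: append one.
	    echo '127.0.1.1 ingress-addon-legacy-547465' | sudo tee -a /etc/hosts
	  fi
	fi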
	I0103 19:09:35.721936   56334 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-8915/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-8915/.minikube}
	I0103 19:09:35.721959   56334 ubuntu.go:177] setting up certificates
	I0103 19:09:35.721968   56334 provision.go:83] configureAuth start
	I0103 19:09:35.722019   56334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-547465
	I0103 19:09:35.739206   56334 provision.go:138] copyHostCerts
	I0103 19:09:35.739246   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:09:35.739278   56334 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem, removing ...
	I0103 19:09:35.739285   56334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:09:35.739352   56334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem (1078 bytes)
	I0103 19:09:35.739422   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:09:35.739439   56334 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem, removing ...
	I0103 19:09:35.739443   56334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:09:35.739471   56334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem (1123 bytes)
	I0103 19:09:35.739512   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:09:35.739537   56334 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem, removing ...
	I0103 19:09:35.739542   56334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:09:35.739562   56334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem (1679 bytes)
	I0103 19:09:35.739606   56334 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-547465 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-547465]
	I0103 19:09:35.967390   56334 provision.go:172] copyRemoteCerts
	I0103 19:09:35.967450   56334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:09:35.967483   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:35.983555   56334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa Username:docker}
	I0103 19:09:36.074065   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:09:36.074121   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:09:36.094049   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:09:36.094105   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0103 19:09:36.115267   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:09:36.115335   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 19:09:36.136281   56334 provision.go:86] duration metric: configureAuth took 414.298543ms
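minikube generates the server certificate in Go, but equivalent openssl commands make the SAN handling visible. An illustrative sketch only: the file names stand in for the ca/server material referenced above, and the SAN list is copied from the log:

	# CSR for the machine, then sign it with the profile CA, embedding the SANs.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.ingress-addon-legacy-547465" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-547465")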
	I0103 19:09:36.136310   56334 ubuntu.go:193] setting minikube options for container-runtime
	I0103 19:09:36.136490   56334 config.go:182] Loaded profile config "ingress-addon-legacy-547465": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0103 19:09:36.136618   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:36.153250   56334 main.go:141] libmachine: Using SSH client type: native
	I0103 19:09:36.153582   56334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0103 19:09:36.153639   56334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:09:36.375370   56334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:09:36.375402   56334 machine.go:91] provisioned docker machine in 1.078725928s
	I0103 19:09:36.375414   56334 client.go:171] LocalClient.Create took 8.954620964s
	I0103 19:09:36.375437   56334 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-547465" took 8.954694433s
	I0103 19:09:36.375446   56334 start.go:300] post-start starting for "ingress-addon-legacy-547465" (driver="docker")
	I0103 19:09:36.375454   56334 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:09:36.375517   56334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:09:36.375559   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:36.391817   56334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa Username:docker}
	I0103 19:09:36.482627   56334 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:09:36.485450   56334 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 19:09:36.485484   56334 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 19:09:36.485497   56334 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 19:09:36.485506   56334 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 19:09:36.485520   56334 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/addons for local assets ...
	I0103 19:09:36.485574   56334 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/files for local assets ...
	I0103 19:09:36.485656   56334 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> 156702.pem in /etc/ssl/certs
	I0103 19:09:36.485666   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> /etc/ssl/certs/156702.pem
	I0103 19:09:36.485771   56334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:09:36.493000   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /etc/ssl/certs/156702.pem (1708 bytes)
	I0103 19:09:36.513123   56334 start.go:303] post-start completed in 137.665282ms
	I0103 19:09:36.513416   56334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-547465
	I0103 19:09:36.530098   56334 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/config.json ...
	I0103 19:09:36.530355   56334 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:09:36.530392   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:36.545881   56334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa Username:docker}
	I0103 19:09:36.630592   56334 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 19:09:36.634551   56334 start.go:128] duration metric: createHost completed in 9.216775466s
	I0103 19:09:36.634578   56334 start.go:83] releasing machines lock for "ingress-addon-legacy-547465", held for 9.216890351s
	I0103 19:09:36.634641   56334 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-547465
	I0103 19:09:36.651035   56334 ssh_runner.go:195] Run: cat /version.json
	I0103 19:09:36.651086   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:36.651124   56334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:09:36.651183   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:09:36.667134   56334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa Username:docker}
	I0103 19:09:36.668307   56334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa Username:docker}
	I0103 19:09:36.836753   56334 ssh_runner.go:195] Run: systemctl --version
	I0103 19:09:36.840939   56334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:09:36.975263   56334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:09:36.979266   56334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:09:36.995953   56334 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 19:09:36.996035   56334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:09:37.021290   56334 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
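The two find invocations above implement the disable-by-rename pattern: any matching CNI config is suffixed with .mk_disabled so cri-o stops loading it. A readable version of the same commands:

	# Rename loopback CNI configs that have not been disabled yet.
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	  -not -name '*.mk_disabled' -exec sh -c 'sudo mv "$1" "$1".mk_disabled' _ {} \;
	# Same treatment for default bridge/podman configs, printing what moved.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1".mk_disabled' _ {} \;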
	I0103 19:09:37.021316   56334 start.go:475] detecting cgroup driver to use...
	I0103 19:09:37.021352   56334 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 19:09:37.021400   56334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:09:37.034395   56334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:09:37.043827   56334 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:09:37.043877   56334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:09:37.054746   56334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:09:37.066725   56334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:09:37.139777   56334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:09:37.214763   56334 docker.go:219] disabling docker service ...
	I0103 19:09:37.214835   56334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:09:37.231196   56334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:09:37.241132   56334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:09:37.318399   56334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:09:37.396170   56334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:09:37.406001   56334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:09:37.419431   56334 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 19:09:37.419499   56334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:09:37.427854   56334 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:09:37.427911   56334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:09:37.435949   56334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:09:37.443671   56334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:09:37.451687   56334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:09:37.459222   56334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:09:37.466052   56334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:09:37.472864   56334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:09:37.542125   56334 ssh_runner.go:195] Run: sudo systemctl restart crio
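Collected in one place, the cri-o reconfiguration above is four sed edits against the same drop-in file followed by a restart:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pin the pause image kubeadm v1.18 expects.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	# Match the host's cgroupfs driver and run conmon in the pod cgroup.
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio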
	I0103 19:09:37.652228   56334 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:09:37.652297   56334 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:09:37.655762   56334 start.go:543] Will wait 60s for crictl version
	I0103 19:09:37.655813   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:37.658745   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:09:37.688314   56334 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 19:09:37.688385   56334 ssh_runner.go:195] Run: crio --version
	I0103 19:09:37.720069   56334 ssh_runner.go:195] Run: crio --version
	I0103 19:09:37.753371   56334 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0103 19:09:37.754913   56334 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-547465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:09:37.770687   56334 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0103 19:09:37.774065   56334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:09:37.783469   56334 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 19:09:37.783531   56334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:09:37.825622   56334 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0103 19:09:37.825697   56334 ssh_runner.go:195] Run: which lz4
	I0103 19:09:37.828874   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0103 19:09:37.828958   56334 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 19:09:37.831899   56334 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 19:09:37.831929   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0103 19:09:38.737400   56334 crio.go:444] Took 0.908468 seconds to copy over tarball
	I0103 19:09:38.737465   56334 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 19:09:41.044107   56334 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.306602515s)
	I0103 19:09:41.044140   56334 crio.go:451] Took 2.306717 seconds to extract the tarball
	I0103 19:09:41.044147   56334 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 19:09:41.112555   56334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:09:41.142951   56334 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0103 19:09:41.142978   56334 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 19:09:41.143042   56334 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:09:41.143087   56334 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0103 19:09:41.143087   56334 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:09:41.143099   56334 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0103 19:09:41.143111   56334 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:09:41.143036   56334 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:09:41.143095   56334 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:09:41.143318   56334 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:09:41.144203   56334 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:09:41.144228   56334 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0103 19:09:41.144279   56334 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:09:41.144289   56334 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:09:41.144289   56334 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:09:41.144249   56334 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:09:41.144197   56334 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0103 19:09:41.144268   56334 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:09:41.300712   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0103 19:09:41.314441   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:09:41.320479   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0103 19:09:41.323690   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:09:41.325888   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0103 19:09:41.335368   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:09:41.339599   56334 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0103 19:09:41.339643   56334 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0103 19:09:41.339696   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:41.347675   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:09:41.384263   56334 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0103 19:09:41.384322   56334 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:09:41.384390   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:41.390562   56334 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0103 19:09:41.390597   56334 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:09:41.390642   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:41.440281   56334 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0103 19:09:41.440307   56334 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0103 19:09:41.440327   56334 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:09:41.440337   56334 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0103 19:09:41.440363   56334 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0103 19:09:41.440373   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:41.440373   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:41.440385   56334 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:09:41.440406   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0103 19:09:41.440421   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:41.460097   56334 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0103 19:09:41.460138   56334 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:09:41.460192   56334 ssh_runner.go:195] Run: which crictl
	I0103 19:09:41.460297   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 19:09:41.460374   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0103 19:09:41.460393   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0103 19:09:41.486650   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0103 19:09:41.486711   56334 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0103 19:09:41.486771   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0103 19:09:41.581529   56334 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0103 19:09:41.581571   56334 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0103 19:09:41.581628   56334 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0103 19:09:41.581688   56334 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0103 19:09:41.592996   56334 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0103 19:09:41.593070   56334 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0103 19:09:41.613311   56334 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0103 19:09:42.015498   56334 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:09:42.150791   56334 cache_images.go:92] LoadImages completed in 1.007798286s
	W0103 19:09:42.150878   56334 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
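The warning means the on-disk image cache had no entry for these legacy tags, so they will be pulled from the registry instead. Pre-seeding the cache avoids this on later runs; a sketch, assuming a minikube binary on PATH:

	# Populate ~/.minikube/cache/images for the tags listed by LoadImages.
	minikube cache add registry.k8s.io/coredns:1.6.7
	minikube cache add registry.k8s.io/kube-apiserver:v1.18.20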
	I0103 19:09:42.150936   56334 ssh_runner.go:195] Run: crio config
	I0103 19:09:42.189976   56334 cni.go:84] Creating CNI manager for ""
	I0103 19:09:42.190003   56334 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:09:42.190018   56334 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:09:42.190037   56334 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-547465 NodeName:ingress-addon-legacy-547465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 19:09:42.190202   56334 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-547465"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
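To exercise a generated config like the one above outside the test harness, a dry run is the usual check. Illustrative; it assumes the v1.18 kubeadm binary minikube stages on the node:

	# Validate the staged config without mutating the node.
	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run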
	
	I0103 19:09:42.190266   56334 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-547465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-547465 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
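The unit override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); inspecting and applying it is standard systemd:

	# Show the kubelet unit with all drop-ins merged, then apply the override.
	systemctl cat kubelet
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet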
	I0103 19:09:42.190312   56334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0103 19:09:42.198282   56334 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:09:42.198349   56334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 19:09:42.205632   56334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0103 19:09:42.220559   56334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0103 19:09:42.235282   56334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0103 19:09:42.249768   56334 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0103 19:09:42.252537   56334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
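The one-liner above is an idempotent hosts update: filter out any stale control-plane entry, append the fresh one, and copy the temp file back over /etc/hosts. Expanded for readability:

	# Rebuild /etc/hosts without the old entry, then re-add the current mapping.
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts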
	I0103 19:09:42.261281   56334 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465 for IP: 192.168.49.2
	I0103 19:09:42.261302   56334 certs.go:190] acquiring lock for shared ca certs: {Name:mk5aa238e4284ee43cf20f760a8d5a161bd1dece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:42.261423   56334 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key
	I0103 19:09:42.261467   56334 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key
	I0103 19:09:42.261508   56334 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.key
	I0103 19:09:42.261519   56334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt with IP's: []
	I0103 19:09:42.383760   56334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt ...
	I0103 19:09:42.383788   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: {Name:mk769796c819695a6961db69e7bb1573299d10fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:42.383943   56334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.key ...
	I0103 19:09:42.383956   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.key: {Name:mk1f7ad5a60cb0d0bc40c0aabe1db6ff8d93de9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:42.384024   56334 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.key.dd3b5fb2
	I0103 19:09:42.384058   56334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 19:09:42.436266   56334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.crt.dd3b5fb2 ...
	I0103 19:09:42.436292   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.crt.dd3b5fb2: {Name:mk13a65bb971b1c2cc617b43bedc6a392f832f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:42.436430   56334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.key.dd3b5fb2 ...
	I0103 19:09:42.436444   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.key.dd3b5fb2: {Name:mkb10ac696248ab38b61a9fe6c5091713a843183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:42.436504   56334 certs.go:337] copying /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.crt
	I0103 19:09:42.436571   56334 certs.go:341] copying /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.key
	I0103 19:09:42.436622   56334 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.key
	I0103 19:09:42.436656   56334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.crt with IP's: []
	I0103 19:09:42.563672   56334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.crt ...
	I0103 19:09:42.563698   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.crt: {Name:mk922dd1e89d3df1fffed465be62f5908d05a451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:42.563849   56334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.key ...
	I0103 19:09:42.563862   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.key: {Name:mkfcc0e7d443a9141d177643e6e522624339ae79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:09:42.563926   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 19:09:42.563943   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 19:09:42.563957   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 19:09:42.563969   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 19:09:42.563981   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:09:42.563991   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:09:42.564000   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:09:42.564012   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:09:42.564064   56334 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem (1338 bytes)
	W0103 19:09:42.564099   56334 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670_empty.pem, impossibly tiny 0 bytes
	I0103 19:09:42.564111   56334 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:09:42.564131   56334 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:09:42.564153   56334 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:09:42.564177   56334 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem (1679 bytes)
	I0103 19:09:42.564225   56334 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem (1708 bytes)
	I0103 19:09:42.564251   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> /usr/share/ca-certificates/156702.pem
	I0103 19:09:42.564264   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:09:42.564276   56334 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem -> /usr/share/ca-certificates/15670.pem
	I0103 19:09:42.564848   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 19:09:42.585824   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 19:09:42.605974   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 19:09:42.625815   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 19:09:42.646224   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:09:42.666179   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0103 19:09:42.685550   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:09:42.705316   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0103 19:09:42.725204   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /usr/share/ca-certificates/156702.pem (1708 bytes)
	I0103 19:09:42.746023   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:09:42.766205   56334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem --> /usr/share/ca-certificates/15670.pem (1338 bytes)
	I0103 19:09:42.785748   56334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 19:09:42.800030   56334 ssh_runner.go:195] Run: openssl version
	I0103 19:09:42.804611   56334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156702.pem && ln -fs /usr/share/ca-certificates/156702.pem /etc/ssl/certs/156702.pem"
	I0103 19:09:42.812256   56334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156702.pem
	I0103 19:09:42.815140   56334 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:05 /usr/share/ca-certificates/156702.pem
	I0103 19:09:42.815186   56334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156702.pem
	I0103 19:09:42.821103   56334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/156702.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 19:09:42.828668   56334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:09:42.836246   56334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:09:42.839185   56334 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:09:42.839223   56334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:09:42.845136   56334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:09:42.852753   56334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15670.pem && ln -fs /usr/share/ca-certificates/15670.pem /etc/ssl/certs/15670.pem"
	I0103 19:09:42.860198   56334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15670.pem
	I0103 19:09:42.862978   56334 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:05 /usr/share/ca-certificates/15670.pem
	I0103 19:09:42.863019   56334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15670.pem
	I0103 19:09:42.868722   56334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15670.pem /etc/ssl/certs/51391683.0"
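
The hash-and-symlink sequence above is the standard OpenSSL trust-store layout: each PEM file lives under /usr/share/ca-certificates and is exposed in /etc/ssl/certs via a symlink named after its subject hash. A minimal bash sketch of the same steps (a hypothetical file name cert.pem stands in for the .pem files in the log):

    # Hypothetical cert name; mirrors the test -s / openssl x509 -hash / ln -fs steps logged above.
    sudo ln -fs /usr/share/ca-certificates/cert.pem /etc/ssl/certs/cert.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
    sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs /etc/ssl/certs/cert.pem "/etc/ssl/certs/${hash}.0"
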
	I0103 19:09:42.876200   56334 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:09:42.879055   56334 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:09:42.879137   56334 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-547465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-547465 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:09:42.879230   56334 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 19:09:42.879285   56334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:09:42.910308   56334 cri.go:89] found id: ""
	I0103 19:09:42.910377   56334 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 19:09:42.918630   56334 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 19:09:42.925903   56334 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 19:09:42.925957   56334 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 19:09:42.933015   56334 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:09:42.933052   56334 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 19:09:42.974114   56334 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0103 19:09:42.974191   56334 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 19:09:43.009711   56334 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0103 19:09:43.009809   56334 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0103 19:09:43.009865   56334 kubeadm.go:322] OS: Linux
	I0103 19:09:43.009937   56334 kubeadm.go:322] CGROUPS_CPU: enabled
	I0103 19:09:43.010000   56334 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0103 19:09:43.010060   56334 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0103 19:09:43.010152   56334 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0103 19:09:43.010218   56334 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0103 19:09:43.010283   56334 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0103 19:09:43.073780   56334 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 19:09:43.073899   56334 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 19:09:43.073999   56334 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 19:09:43.243291   56334 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:09:43.244348   56334 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:09:43.244421   56334 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 19:09:43.317391   56334 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:09:43.322557   56334 out.go:204]   - Generating certificates and keys ...
	I0103 19:09:43.322661   56334 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 19:09:43.322732   56334 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 19:09:43.513553   56334 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 19:09:43.603812   56334 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 19:09:43.821617   56334 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 19:09:43.910747   56334 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 19:09:43.959878   56334 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 19:09:43.960056   56334 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-547465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 19:09:44.078901   56334 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 19:09:44.079066   56334 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-547465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 19:09:44.299641   56334 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 19:09:44.606297   56334 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 19:09:44.835589   56334 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 19:09:44.835705   56334 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:09:44.996140   56334 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:09:45.200983   56334 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:09:45.878627   56334 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:09:45.985544   56334 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:09:45.986085   56334 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:09:45.988053   56334 out.go:204]   - Booting up control plane ...
	I0103 19:09:45.988154   56334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:09:45.992138   56334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:09:45.993333   56334 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:09:45.994520   56334 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 19:09:45.996554   56334 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 19:09:51.998911   56334 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002389 seconds
	I0103 19:09:51.999066   56334 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 19:09:52.008972   56334 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 19:09:52.524213   56334 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 19:09:52.524401   56334 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-547465 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0103 19:09:53.032452   56334 kubeadm.go:322] [bootstrap-token] Using token: kl8ozn.dv2u6pxttsj72dto
	I0103 19:09:53.033926   56334 out.go:204]   - Configuring RBAC rules ...
	I0103 19:09:53.034029   56334 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 19:09:53.036890   56334 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 19:09:53.042690   56334 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 19:09:53.044412   56334 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 19:09:53.046073   56334 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 19:09:53.047728   56334 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 19:09:53.054080   56334 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 19:09:53.204503   56334 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 19:09:53.445386   56334 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 19:09:53.446306   56334 kubeadm.go:322] 
	I0103 19:09:53.446425   56334 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 19:09:53.446445   56334 kubeadm.go:322] 
	I0103 19:09:53.446554   56334 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 19:09:53.446563   56334 kubeadm.go:322] 
	I0103 19:09:53.446598   56334 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 19:09:53.446695   56334 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 19:09:53.446772   56334 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 19:09:53.446782   56334 kubeadm.go:322] 
	I0103 19:09:53.446854   56334 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 19:09:53.446950   56334 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 19:09:53.447013   56334 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 19:09:53.447024   56334 kubeadm.go:322] 
	I0103 19:09:53.447090   56334 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 19:09:53.447157   56334 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 19:09:53.447163   56334 kubeadm.go:322] 
	I0103 19:09:53.447229   56334 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kl8ozn.dv2u6pxttsj72dto \
	I0103 19:09:53.447338   56334 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 \
	I0103 19:09:53.447379   56334 kubeadm.go:322]     --control-plane 
	I0103 19:09:53.447389   56334 kubeadm.go:322] 
	I0103 19:09:53.447496   56334 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 19:09:53.447507   56334 kubeadm.go:322] 
	I0103 19:09:53.447614   56334 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kl8ozn.dv2u6pxttsj72dto \
	I0103 19:09:53.447755   56334 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 
	I0103 19:09:53.449623   56334 kubeadm.go:322] W0103 19:09:42.973647    1382 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0103 19:09:53.449877   56334 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0103 19:09:53.450035   56334 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:09:53.450179   56334 kubeadm.go:322] W0103 19:09:45.991900    1382 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 19:09:53.450286   56334 kubeadm.go:322] W0103 19:09:45.993227    1382 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 19:09:53.450306   56334 cni.go:84] Creating CNI manager for ""
	I0103 19:09:53.450315   56334 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:09:53.452066   56334 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 19:09:53.453469   56334 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:09:53.457302   56334 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0103 19:09:53.457319   56334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:09:53.473336   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:09:53.890375   56334 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:09:53.890460   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:53.890492   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=ingress-addon-legacy-547465 minikube.k8s.io/updated_at=2024_01_03T19_09_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:53.998707   56334 ops.go:34] apiserver oom_adj: -16
	I0103 19:09:53.998718   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:54.499145   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:54.999157   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:55.499327   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:55.998809   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:56.499186   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:56.998844   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:57.499386   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:57.998816   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:58.499243   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:58.999344   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:59.499049   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:09:59.999706   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:00.498904   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:00.999347   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:01.499335   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:01.999391   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:02.498977   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:02.999363   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:03.499335   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:03.999209   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:04.499359   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:04.998802   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:05.498773   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:05.999328   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:06.499009   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:06.998932   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:07.499384   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:07.999345   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:08.498993   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:08.999366   56334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:10:09.063956   56334 kubeadm.go:1088] duration metric: took 15.173557515s to wait for elevateKubeSystemPrivileges.
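
The burst of repeated "kubectl get sa default" calls above is minikube's wait for elevateKubeSystemPrivileges: it polls roughly every 500ms until the default service account exists, which signals that the control plane can serve workloads. The same poll-until-ready pattern as a bash sketch, using the binary and kubeconfig paths from the log:

    # Poll ~2x/second until the default ServiceAccount exists (paths as logged above).
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
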
	I0103 19:10:09.063988   56334 kubeadm.go:406] StartCluster complete in 26.184882395s
	I0103 19:10:09.064005   56334 settings.go:142] acquiring lock: {Name:mk6273be8cd3d06b021992a8bd25ebbd6366b42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:10:09.064059   56334 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:10:09.064724   56334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/kubeconfig: {Name:mke772e93691b15e3e729ce43b6e84f73895395b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:10:09.064966   56334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:10:09.065000   56334 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 19:10:09.065091   56334 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-547465"
	I0103 19:10:09.065100   56334 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-547465"
	I0103 19:10:09.065114   56334 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-547465"
	I0103 19:10:09.065131   56334 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-547465"
	I0103 19:10:09.065166   56334 host.go:66] Checking if "ingress-addon-legacy-547465" exists ...
	I0103 19:10:09.065202   56334 config.go:182] Loaded profile config "ingress-addon-legacy-547465": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0103 19:10:09.065537   56334 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-547465 --format={{.State.Status}}
	I0103 19:10:09.065627   56334 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-547465 --format={{.State.Status}}
	I0103 19:10:09.065707   56334 kapi.go:59] client config for ingress-addon-legacy-547465: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:10:09.066436   56334 cert_rotation.go:137] Starting client certificate rotation controller
	I0103 19:10:09.090739   56334 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:10:09.089939   56334 kapi.go:59] client config for ingress-addon-legacy-547465: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:10:09.092296   56334 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:10:09.092312   56334 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 19:10:09.092376   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:10:09.092607   56334 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-547465"
	I0103 19:10:09.092650   56334 host.go:66] Checking if "ingress-addon-legacy-547465" exists ...
	I0103 19:10:09.093265   56334 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-547465 --format={{.State.Status}}
	I0103 19:10:09.113598   56334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa Username:docker}
	I0103 19:10:09.118400   56334 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 19:10:09.118422   56334 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 19:10:09.118480   56334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-547465
	I0103 19:10:09.136816   56334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/ingress-addon-legacy-547465/id_rsa Username:docker}
	I0103 19:10:09.191531   56334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 19:10:09.287506   56334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:10:09.294325   56334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 19:10:09.481827   56334 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
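
For reference, the sed pipeline at 19:10:09.191531 splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway; reconstructed from those sed expressions, the injected stanza is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
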
	I0103 19:10:09.576370   56334 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-547465" context rescaled to 1 replicas
	I0103 19:10:09.576432   56334 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:10:09.578920   56334 out.go:177] * Verifying Kubernetes components...
	I0103 19:10:09.580346   56334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:10:09.676476   56334 kapi.go:59] client config for ingress-addon-legacy-547465: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:10:09.676836   56334 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-547465" to be "Ready" ...
	I0103 19:10:09.682779   56334 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 19:10:09.684220   56334 addons.go:508] enable addons completed in 619.221814ms: enabled=[storage-provisioner default-storageclass]
	I0103 19:10:11.679927   56334 node_ready.go:58] node "ingress-addon-legacy-547465" has status "Ready":"False"
	I0103 19:10:13.715827   56334 node_ready.go:49] node "ingress-addon-legacy-547465" has status "Ready":"True"
	I0103 19:10:13.715854   56334 node_ready.go:38] duration metric: took 4.03899007s waiting for node "ingress-addon-legacy-547465" to be "Ready" ...
	I0103 19:10:13.715868   56334 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:10:13.922514   56334 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-pslsp" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:15.926039   56334 pod_ready.go:102] pod "coredns-66bff467f8-pslsp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-03 19:10:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0103 19:10:18.425618   56334 pod_ready.go:102] pod "coredns-66bff467f8-pslsp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-03 19:10:08 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0103 19:10:20.428105   56334 pod_ready.go:102] pod "coredns-66bff467f8-pslsp" in "kube-system" namespace has status "Ready":"False"
	I0103 19:10:22.428348   56334 pod_ready.go:102] pod "coredns-66bff467f8-pslsp" in "kube-system" namespace has status "Ready":"False"
	I0103 19:10:24.928114   56334 pod_ready.go:102] pod "coredns-66bff467f8-pslsp" in "kube-system" namespace has status "Ready":"False"
	I0103 19:10:25.428530   56334 pod_ready.go:92] pod "coredns-66bff467f8-pslsp" in "kube-system" namespace has status "Ready":"True"
	I0103 19:10:25.428556   56334 pod_ready.go:81] duration metric: took 11.506016246s waiting for pod "coredns-66bff467f8-pslsp" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.428564   56334 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.432729   56334 pod_ready.go:92] pod "etcd-ingress-addon-legacy-547465" in "kube-system" namespace has status "Ready":"True"
	I0103 19:10:25.432750   56334 pod_ready.go:81] duration metric: took 4.180013ms waiting for pod "etcd-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.432761   56334 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.436800   56334 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-547465" in "kube-system" namespace has status "Ready":"True"
	I0103 19:10:25.436821   56334 pod_ready.go:81] duration metric: took 4.051494ms waiting for pod "kube-apiserver-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.436829   56334 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.440748   56334 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-547465" in "kube-system" namespace has status "Ready":"True"
	I0103 19:10:25.440769   56334 pod_ready.go:81] duration metric: took 3.934721ms waiting for pod "kube-controller-manager-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.440781   56334 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d48x5" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.444083   56334 pod_ready.go:92] pod "kube-proxy-d48x5" in "kube-system" namespace has status "Ready":"True"
	I0103 19:10:25.444101   56334 pod_ready.go:81] duration metric: took 3.313467ms waiting for pod "kube-proxy-d48x5" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.444108   56334 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.624615   56334 request.go:629] Waited for 180.416398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-547465
	I0103 19:10:25.823628   56334 request.go:629] Waited for 196.279961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-547465
	I0103 19:10:25.826267   56334 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-547465" in "kube-system" namespace has status "Ready":"True"
	I0103 19:10:25.826289   56334 pod_ready.go:81] duration metric: took 382.17461ms waiting for pod "kube-scheduler-ingress-addon-legacy-547465" in "kube-system" namespace to be "Ready" ...
	I0103 19:10:25.826299   56334 pod_ready.go:38] duration metric: took 12.110418755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:10:25.826319   56334 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:10:25.826387   56334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:10:25.836609   56334 api_server.go:72] duration metric: took 16.260137457s to wait for apiserver process to appear ...
	I0103 19:10:25.836632   56334 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:10:25.836652   56334 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0103 19:10:25.841416   56334 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0103 19:10:25.842388   56334 api_server.go:141] control plane version: v1.18.20
	I0103 19:10:25.842411   56334 api_server.go:131] duration metric: took 5.773503ms to wait for apiserver health ...
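
The healthz probe logged above can be reproduced from the host with curl; a sketch assuming the cluster CA and the profile client certificate paths that appear earlier in this log (the client cert is included because unauthenticated access to /healthz may be refused; minikube itself connects with these credentials):

    # Expect HTTP 200 with body "ok", matching the api_server.go lines above.
    curl --cacert /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt \
         --cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt \
         --key /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.key \
         https://192.168.49.2:8443/healthz
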
	I0103 19:10:25.842422   56334 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:10:26.023766   56334 request.go:629] Waited for 181.262019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:10:26.029110   56334 system_pods.go:59] 8 kube-system pods found
	I0103 19:10:26.029136   56334 system_pods.go:61] "coredns-66bff467f8-pslsp" [14d5795d-b91e-4232-95af-62b0511d02d8] Running
	I0103 19:10:26.029141   56334 system_pods.go:61] "etcd-ingress-addon-legacy-547465" [0d60bcbd-4acb-4e4d-ba6a-2f6c7e59a92e] Running
	I0103 19:10:26.029145   56334 system_pods.go:61] "kindnet-7bpkw" [6e458ce0-711a-4f36-8a2b-32ef7b633e5b] Running
	I0103 19:10:26.029149   56334 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-547465" [fc9939fa-8b30-4c7f-998f-b2c1bd8a7d08] Running
	I0103 19:10:26.029153   56334 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-547465" [85df7877-b968-4897-afd1-2e58f240b4ac] Running
	I0103 19:10:26.029158   56334 system_pods.go:61] "kube-proxy-d48x5" [7d028fff-e60f-4e20-81ad-aac09a01b04f] Running
	I0103 19:10:26.029165   56334 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-547465" [04f5f29f-679f-48b7-b548-07b46ab39255] Running
	I0103 19:10:26.029171   56334 system_pods.go:61] "storage-provisioner" [6dee3b12-b68f-416f-b689-7d8998bbf71b] Running
	I0103 19:10:26.029177   56334 system_pods.go:74] duration metric: took 186.749619ms to wait for pod list to return data ...
	I0103 19:10:26.029187   56334 default_sa.go:34] waiting for default service account to be created ...
	I0103 19:10:26.224612   56334 request.go:629] Waited for 195.334939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:10:26.227102   56334 default_sa.go:45] found service account: "default"
	I0103 19:10:26.227127   56334 default_sa.go:55] duration metric: took 197.933897ms for default service account to be created ...
	I0103 19:10:26.227135   56334 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 19:10:26.424610   56334 request.go:629] Waited for 197.390379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:10:26.429846   56334 system_pods.go:86] 8 kube-system pods found
	I0103 19:10:26.429876   56334 system_pods.go:89] "coredns-66bff467f8-pslsp" [14d5795d-b91e-4232-95af-62b0511d02d8] Running
	I0103 19:10:26.429884   56334 system_pods.go:89] "etcd-ingress-addon-legacy-547465" [0d60bcbd-4acb-4e4d-ba6a-2f6c7e59a92e] Running
	I0103 19:10:26.429889   56334 system_pods.go:89] "kindnet-7bpkw" [6e458ce0-711a-4f36-8a2b-32ef7b633e5b] Running
	I0103 19:10:26.429895   56334 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-547465" [fc9939fa-8b30-4c7f-998f-b2c1bd8a7d08] Running
	I0103 19:10:26.429901   56334 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-547465" [85df7877-b968-4897-afd1-2e58f240b4ac] Running
	I0103 19:10:26.429907   56334 system_pods.go:89] "kube-proxy-d48x5" [7d028fff-e60f-4e20-81ad-aac09a01b04f] Running
	I0103 19:10:26.429919   56334 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-547465" [04f5f29f-679f-48b7-b548-07b46ab39255] Running
	I0103 19:10:26.429929   56334 system_pods.go:89] "storage-provisioner" [6dee3b12-b68f-416f-b689-7d8998bbf71b] Running
	I0103 19:10:26.429943   56334 system_pods.go:126] duration metric: took 202.798077ms to wait for k8s-apps to be running ...
	I0103 19:10:26.429956   56334 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:10:26.430006   56334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:10:26.440535   56334 system_svc.go:56] duration metric: took 10.567204ms WaitForService to wait for kubelet.
	I0103 19:10:26.440562   56334 kubeadm.go:581] duration metric: took 16.864098151s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:10:26.440585   56334 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:10:26.624058   56334 request.go:629] Waited for 183.324049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0103 19:10:26.626916   56334 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0103 19:10:26.626940   56334 node_conditions.go:123] node cpu capacity is 8
	I0103 19:10:26.626952   56334 node_conditions.go:105] duration metric: took 186.361734ms to run NodePressure ...
	I0103 19:10:26.626991   56334 start.go:228] waiting for startup goroutines ...
	I0103 19:10:26.626997   56334 start.go:233] waiting for cluster config update ...
	I0103 19:10:26.627006   56334 start.go:242] writing updated cluster config ...
	I0103 19:10:26.627275   56334 ssh_runner.go:195] Run: rm -f paused
	I0103 19:10:26.672507   56334 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0103 19:10:26.674969   56334 out.go:177] 
	W0103 19:10:26.676820   56334 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0103 19:10:26.678540   56334 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0103 19:10:26.680356   56334 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-547465" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 03 19:13:25 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:25.251713265Z" level=info msg="Started container" PID=4876 containerID=52cdae4de313fd98a7e9176cc8608b1218daebba1fc7f806d132bfd75a4a5bcf description=default/hello-world-app-5f5d8b66bb-4w4sz/hello-world-app id=1774d0d3-e92d-4ed7-ba4e-8c652bac7549 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=6d9655075ec902cbcc6abd24c32335b4b5f779393db779f434cdd09391f31736
	Jan 03 19:13:28 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:28.592323950Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=152f663c-f5f5-4d4f-808b-65225148c728 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 03 19:13:39 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:39.593085782Z" level=info msg="Stopping pod sandbox: a6d256713ac4572ebf386a58026651e635ebd92a742280ccb319890833b8ba22" id=4de463c4-f359-4298-8d44-0006ce20d12e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:39 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:39.594117730Z" level=info msg="Stopped pod sandbox: a6d256713ac4572ebf386a58026651e635ebd92a742280ccb319890833b8ba22" id=4de463c4-f359-4298-8d44-0006ce20d12e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:40 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:40.010290942Z" level=info msg="Stopping pod sandbox: a6d256713ac4572ebf386a58026651e635ebd92a742280ccb319890833b8ba22" id=053b3a6b-4e6c-4ea5-a374-5fe24d309b2a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:40 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:40.010338747Z" level=info msg="Stopped pod sandbox (already stopped): a6d256713ac4572ebf386a58026651e635ebd92a742280ccb319890833b8ba22" id=053b3a6b-4e6c-4ea5-a374-5fe24d309b2a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:40 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:40.769352607Z" level=info msg="Stopping container: c29b340d9e37b3723cfa6e4c1db6a18aff51e245bb0f9baad4b22d5b86c2d0bf (timeout: 2s)" id=2985c238-9511-49b1-aac3-5cf70a71c12e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 19:13:40 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:40.771725494Z" level=info msg="Stopping container: c29b340d9e37b3723cfa6e4c1db6a18aff51e245bb0f9baad4b22d5b86c2d0bf (timeout: 2s)" id=b1863da1-5a51-4060-9156-b900e6cff815 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 19:13:41 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:41.592001242Z" level=info msg="Stopping pod sandbox: a6d256713ac4572ebf386a58026651e635ebd92a742280ccb319890833b8ba22" id=6782f240-f6d5-4b84-977e-1f870edf57d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:41 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:41.592055506Z" level=info msg="Stopped pod sandbox (already stopped): a6d256713ac4572ebf386a58026651e635ebd92a742280ccb319890833b8ba22" id=6782f240-f6d5-4b84-977e-1f870edf57d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.777449350Z" level=warning msg="Stopping container c29b340d9e37b3723cfa6e4c1db6a18aff51e245bb0f9baad4b22d5b86c2d0bf with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=2985c238-9511-49b1-aac3-5cf70a71c12e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 19:13:42 ingress-addon-legacy-547465 conmon[3413]: conmon c29b340d9e37b3723cfa <ninfo>: container 3425 exited with status 137
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.923036850Z" level=info msg="Stopped container c29b340d9e37b3723cfa6e4c1db6a18aff51e245bb0f9baad4b22d5b86c2d0bf: ingress-nginx/ingress-nginx-controller-7fcf777cb7-k8nst/controller" id=b1863da1-5a51-4060-9156-b900e6cff815 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.923109120Z" level=info msg="Stopped container c29b340d9e37b3723cfa6e4c1db6a18aff51e245bb0f9baad4b22d5b86c2d0bf: ingress-nginx/ingress-nginx-controller-7fcf777cb7-k8nst/controller" id=2985c238-9511-49b1-aac3-5cf70a71c12e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.923720400Z" level=info msg="Stopping pod sandbox: a292a85fba8af8706b2d518c94538fb7879eb8edfd26e7dc17377faf21f91148" id=8423880c-3f2f-449f-a4b6-a14fc0ea5bda name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.923722240Z" level=info msg="Stopping pod sandbox: a292a85fba8af8706b2d518c94538fb7879eb8edfd26e7dc17377faf21f91148" id=45dfbc94-7cac-480e-ba21-f99e42ebe3bb name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.926483708Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-HTJF7XVCFNCFMNIZ - [0:0]\n:KUBE-HP-F6FUAG6HOJPCVV56 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-F6FUAG6HOJPCVV56\n-X KUBE-HP-HTJF7XVCFNCFMNIZ\nCOMMIT\n"
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.927723736Z" level=info msg="Closing host port tcp:80"
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.927762390Z" level=info msg="Closing host port tcp:443"
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.928703774Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.928724953Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.928844371Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-k8nst Namespace:ingress-nginx ID:a292a85fba8af8706b2d518c94538fb7879eb8edfd26e7dc17377faf21f91148 UID:50ab10d7-9887-4aba-8a85-6b98fcf59c06 NetNS:/var/run/netns/336f6867-e9c0-4f8d-a605-efdc42ecc18b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.928958299Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-k8nst from CNI network \"kindnet\" (type=ptp)"
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.967671366Z" level=info msg="Stopped pod sandbox: a292a85fba8af8706b2d518c94538fb7879eb8edfd26e7dc17377faf21f91148" id=8423880c-3f2f-449f-a4b6-a14fc0ea5bda name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 19:13:42 ingress-addon-legacy-547465 crio[961]: time="2024-01-03 19:13:42.967783806Z" level=info msg="Stopped pod sandbox (already stopped): a292a85fba8af8706b2d518c94538fb7879eb8edfd26e7dc17377faf21f91148" id=45dfbc94-7cac-480e-ba21-f99e42ebe3bb name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
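
The sections that follow (container status and per-container logs) are gathered from inside the node with the CRI tooling; a hedged way to collect the same container listing by hand:

    # Open a shell on the node, then list all CRI containers as in the table below.
    minikube -p ingress-addon-legacy-547465 ssh
    sudo crictl ps -a
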
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52cdae4de313f       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   6d9655075ec90       hello-world-app-5f5d8b66bb-4w4sz
	adfdb6d09a5f3       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   ba5a23520f0e4       nginx
	c29b340d9e37b       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   a292a85fba8af       ingress-nginx-controller-7fcf777cb7-k8nst
	9583ef3361781       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   ab765c2136173       ingress-nginx-admission-patch-tt9fm
	fca6f6c473f7a       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   2f24aa2e886b0       ingress-nginx-admission-create-44jz6
	4bf0d3a315c71       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   f2b83a73cafac       coredns-66bff467f8-pslsp
	f8a6fe5e6b793       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   123302ad0ac90       storage-provisioner
	d9f69a071d034       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   e64edb3cdea80       kindnet-7bpkw
	22a92c101a48e       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   10ed0e1794999       kube-proxy-d48x5
	1bfd906169ee4       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   939332cbe6108       kube-scheduler-ingress-addon-legacy-547465
	b86b04594b85f       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   c3be13e923f5c       kube-controller-manager-ingress-addon-legacy-547465
	5d9ca5a00b0c8       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   1fbf622648ccc       kube-apiserver-ingress-addon-legacy-547465
	ebd1653af2b5f       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   474f9df7592ae       etcd-ingress-addon-legacy-547465
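For reference, the table above is the CRI-level view of the node; it can be reproduced on demand by shelling into the profile and querying the runtime directly (standard crictl flags; the profile name is the one this test uses throughout):

	out/minikube-linux-amd64 -p ingress-addon-legacy-547465 ssh "sudo crictl ps -a"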
	
	
	==> coredns [4bf0d3a315c71b43e37709d8f3b6d55b14ed3727072a792a1814a1c6e1672504] <==
	[INFO] 10.244.0.5:39437 - 18854 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002904784s
	[INFO] 10.244.0.5:58181 - 22686 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003808819s
	[INFO] 10.244.0.5:38908 - 24964 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003858028s
	[INFO] 10.244.0.5:37866 - 54019 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003957262s
	[INFO] 10.244.0.5:44954 - 58097 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003766767s
	[INFO] 10.244.0.5:40200 - 39378 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003834185s
	[INFO] 10.244.0.5:47735 - 40362 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004019551s
	[INFO] 10.244.0.5:56957 - 10034 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004180171s
	[INFO] 10.244.0.5:39437 - 22545 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004007332s
	[INFO] 10.244.0.5:47735 - 40436 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003462249s
	[INFO] 10.244.0.5:37866 - 16410 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003544122s
	[INFO] 10.244.0.5:56957 - 17905 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003370762s
	[INFO] 10.244.0.5:38908 - 41876 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003627692s
	[INFO] 10.244.0.5:39437 - 13292 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003533754s
	[INFO] 10.244.0.5:44954 - 1217 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003701685s
	[INFO] 10.244.0.5:40200 - 51360 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003544577s
	[INFO] 10.244.0.5:58181 - 45797 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003860097s
	[INFO] 10.244.0.5:47735 - 29789 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102618s
	[INFO] 10.244.0.5:37866 - 2647 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000137991s
	[INFO] 10.244.0.5:39437 - 64795 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006381s
	[INFO] 10.244.0.5:38908 - 3910 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000142309s
	[INFO] 10.244.0.5:40200 - 37791 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000152777s
	[INFO] 10.244.0.5:44954 - 59139 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000180648s
	[INFO] 10.244.0.5:58181 - 63696 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000215324s
	[INFO] 10.244.0.5:56957 - 11972 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000233373s
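The wall of NXDOMAIN answers above is normal resolver behavior rather than a fault: with the default ndots:5, a name such as hello-world-app.default.svc.cluster.local has only four dots, so the resolver first appends every suffix on the pod's search path, including the GCP host domains c.k8s-minikube.internal and google.internal inherited by the node, before the absolute query returns NOERROR. A minimal sketch of the pod /etc/resolv.conf that would produce exactly this query pattern (the nameserver address is the conventional kube-dns service IP and is an assumption here):

	nameserver 10.96.0.10
	search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	options ndots:5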
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-547465
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-547465
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=ingress-addon-legacy-547465
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_09_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:09:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-547465
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:13:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:11:23 +0000   Wed, 03 Jan 2024 19:09:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:11:23 +0000   Wed, 03 Jan 2024 19:09:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:11:23 +0000   Wed, 03 Jan 2024 19:09:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:11:23 +0000   Wed, 03 Jan 2024 19:10:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-547465
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 33200c994dc347849688dae4074431ff
	  System UUID:                416666a4-372a-469e-bf63-cab409ff4f60
	  Boot ID:                    b5a86fc9-be37-4e1f-bbe9-b1739322b77c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-4w4sz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-pslsp                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m40s
	  kube-system                 etcd-ingress-addon-legacy-547465                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kindnet-7bpkw                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m40s
	  kube-system                 kube-apiserver-ingress-addon-legacy-547465             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-547465    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-d48x5                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-scheduler-ingress-addon-legacy-547465             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m2s (x5 over 4m3s)  kubelet     Node ingress-addon-legacy-547465 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x4 over 4m3s)  kubelet     Node ingress-addon-legacy-547465 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x4 over 4m3s)  kubelet     Node ingress-addon-legacy-547465 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m55s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m55s                kubelet     Node ingress-addon-legacy-547465 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s                kubelet     Node ingress-addon-legacy-547465 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s                kubelet     Node ingress-addon-legacy-547465 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m40s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m35s                kubelet     Node ingress-addon-legacy-547465 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.005068] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007977] FS-Cache: N-cookie d=000000004745ad78{9p.inode} n=0000000020f84279
	[  +0.008772] FS-Cache: N-key=[8] '8ba00f0200000000'
	[  +0.279221] FS-Cache: Duplicate cookie detected
	[  +0.004677] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006751] FS-Cache: O-cookie d=000000004745ad78{9p.inode} n=00000000084ca310
	[  +0.007401] FS-Cache: O-key=[8] '98a00f0200000000'
	[  +0.004937] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006581] FS-Cache: N-cookie d=000000004745ad78{9p.inode} n=00000000e0790b00
	[  +0.008725] FS-Cache: N-key=[8] '98a00f0200000000'
	[Jan 3 19:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 3 19:11] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +1.016130] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +2.015807] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +4.127685] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +8.191385] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[ +16.126814] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[Jan 3 19:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
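The repeated martian source entries log packets that reached eth0 carrying the loopback source 127.0.0.1 and addressed to the pod IP 10.244.0.5, which is consistent with the ingress reachability check hitting 127.0.0.1 on the host and the request being NATed into the pod network; they are likely cosmetic here. If the noise is unwanted it can be silenced with a standard sysctl (illustrative):

	sudo sysctl -w net.ipv4.conf.all.log_martians=0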
	
	
	==> etcd [ebd1653af2b5f4840d4653325412f82f466b6cb11749210d0f153b69653b856b] <==
	2024-01-03 19:09:46.793236 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/03 19:09:46 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-03 19:09:46.793686 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-03 19:09:46.802565 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-03 19:09:46.802725 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-03 19:09:46.802760 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/03 19:09:47 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/03 19:09:47 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/03 19:09:47 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/03 19:09:47 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/03 19:09:47 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-03 19:09:47.284773 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-03 19:09:47.285754 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-03 19:09:47.285820 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-03 19:09:47.285842 I | embed: ready to serve client requests
	2024-01-03 19:09:47.285862 I | etcdserver: published {Name:ingress-addon-legacy-547465 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-03 19:09:47.285874 I | embed: ready to serve client requests
	2024-01-03 19:09:47.288039 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-03 19:09:47.288136 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-03 19:10:13.915402 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-pslsp\" " with result "range_response_count:1 size:3753" took too long (198.621633ms) to execute
	2024-01-03 19:10:13.915553 W | etcdserver: request "header:<ID:8128026255620809706 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/ingress-addon-legacy-547465\" mod_revision:413 > success:<request_put:<key:\"/registry/minions/ingress-addon-legacy-547465\" value_size:6323 >> failure:<request_range:<key:\"/registry/minions/ingress-addon-legacy-547465\" > >>" with result "size:16" took too long (138.472982ms) to execute
	2024-01-03 19:10:13.915767 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:37425" took too long (198.938379ms) to execute
	2024-01-03 19:10:14.117007 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-pslsp\" " with result "range_response_count:1 size:3753" took too long (193.568661ms) to execute
	2024-01-03 19:10:14.117229 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2695" took too long (112.181765ms) to execute
	2024-01-03 19:10:15.441885 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (126.226039ms) to execute
	
	
	==> kernel <==
	 19:13:48 up 56 min,  0 users,  load average: 0.20, 0.49, 0.41
	Linux ingress-addon-legacy-547465 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d9f69a071d034274582602116df4303678bfdc078f998812fdd5466402213af2] <==
	I0103 19:11:43.560242       1 main.go:227] handling current node
	I0103 19:11:53.563281       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:11:53.563302       1 main.go:227] handling current node
	I0103 19:12:03.575103       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:12:03.575125       1 main.go:227] handling current node
	I0103 19:12:13.578902       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:12:13.578927       1 main.go:227] handling current node
	I0103 19:12:23.589136       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:12:23.589157       1 main.go:227] handling current node
	I0103 19:12:33.592464       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:12:33.592495       1 main.go:227] handling current node
	I0103 19:12:43.602154       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:12:43.602182       1 main.go:227] handling current node
	I0103 19:12:53.605262       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:12:53.605285       1 main.go:227] handling current node
	I0103 19:13:03.617115       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:13:03.617144       1 main.go:227] handling current node
	I0103 19:13:13.621101       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:13:13.621127       1 main.go:227] handling current node
	I0103 19:13:23.633215       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:13:23.633241       1 main.go:227] handling current node
	I0103 19:13:33.637345       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:13:33.637369       1 main.go:227] handling current node
	I0103 19:13:43.648758       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:13:43.648782       1 main.go:227] handling current node
	
	
	==> kube-apiserver [5d9ca5a00b0c89f715c047b0628d93fec1db4cc03899cbd0ecca0206b5b48566] <==
	I0103 19:09:50.468711       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0103 19:09:50.472595       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0103 19:09:50.568440       1 cache.go:39] Caches are synced for autoregister controller
	I0103 19:09:50.574393       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0103 19:09:50.574470       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 19:09:50.576293       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0103 19:09:50.576340       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 19:09:51.466966       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0103 19:09:51.466995       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0103 19:09:51.471576       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0103 19:09:51.474398       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0103 19:09:51.474418       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0103 19:09:51.744197       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 19:09:51.772838       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0103 19:09:51.912059       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0103 19:09:51.913028       1 controller.go:609] quota admission added evaluator for: endpoints
	I0103 19:09:51.916141       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0103 19:09:52.758210       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0103 19:09:53.196156       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0103 19:09:53.436881       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0103 19:09:53.564918       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 19:10:08.208010       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0103 19:10:08.379831       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0103 19:10:27.382518       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0103 19:11:00.279927       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [b86b04594b85f145d38daf2585e043bf8b0f3b5376a2cd66dd7ccbd6864b2d6f] <==
	I0103 19:10:08.594854       1 shared_informer.go:230] Caches are synced for taint 
	I0103 19:10:08.594958       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	I0103 19:10:08.594994       1 taint_manager.go:187] Starting NoExecuteTaintManager
	W0103 19:10:08.595058       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-547465. Assuming now as a timestamp.
	I0103 19:10:08.595107       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0103 19:10:08.595153       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-547465", UID:"e2c5a28e-b669-415b-89af-30ef81750b23", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-547465 event: Registered Node ingress-addon-legacy-547465 in Controller
	I0103 19:10:08.612351       1 shared_informer.go:230] Caches are synced for resource quota 
	I0103 19:10:08.616132       1 shared_informer.go:230] Caches are synced for namespace 
	I0103 19:10:08.618420       1 shared_informer.go:230] Caches are synced for service account 
	I0103 19:10:08.675215       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0103 19:10:08.713861       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0103 19:10:08.762213       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0103 19:10:08.762236       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0103 19:10:09.088808       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"2b6b0076-c8dc-484a-a829-0bce2fd389ae", APIVersion:"apps/v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0103 19:10:09.100963       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"7ca0d968-ffa0-4a01-ba7d-68f6688938d7", APIVersion:"apps/v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-rd7rg
	I0103 19:10:18.595606       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0103 19:10:27.346601       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c907f7f2-55db-4bd6-8d44-9f32bcb351c0", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0103 19:10:27.385375       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"da50b438-2a15-4e01-acc7-2e3751aff24c", APIVersion:"apps/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-k8nst
	I0103 19:10:27.402709       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"84baca07-dbd9-4874-8295-39cf22f821d8", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-44jz6
	I0103 19:10:27.495910       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2cf70a80-5cd8-4502-a4a6-6a01171bfc54", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-tt9fm
	I0103 19:10:32.687383       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"84baca07-dbd9-4874-8295-39cf22f821d8", APIVersion:"batch/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0103 19:10:33.690173       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2cf70a80-5cd8-4502-a4a6-6a01171bfc54", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0103 19:13:22.173558       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"dcba36ba-5f06-4508-934a-aa40d9be86a0", APIVersion:"apps/v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0103 19:13:22.178196       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"1e9cc3b9-be96-46c6-9d58-aeb42528b726", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-4w4sz
	E0103 19:13:45.465525       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-mqtjz" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [22a92c101a48e6f0e8da0bba67c3228145f67ed18d87523e6762593fbb5cb66d] <==
	W0103 19:10:08.860111       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0103 19:10:08.867055       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0103 19:10:08.867086       1 server_others.go:186] Using iptables Proxier.
	I0103 19:10:08.867354       1 server.go:583] Version: v1.18.20
	I0103 19:10:08.867745       1 config.go:315] Starting service config controller
	I0103 19:10:08.867764       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0103 19:10:08.867908       1 config.go:133] Starting endpoints config controller
	I0103 19:10:08.867935       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0103 19:10:08.967981       1 shared_informer.go:230] Caches are synced for service config 
	I0103 19:10:08.968059       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [1bfd906169ee405388e395ed3a8e7e537072b6751963b2f66bb78636fe879619] <==
	I0103 19:09:50.577718       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0103 19:09:50.577808       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0103 19:09:50.580710       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0103 19:09:50.580816       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 19:09:50.581320       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 19:09:50.580835       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0103 19:09:50.585110       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 19:09:50.585274       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0103 19:09:50.585572       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 19:09:50.586544       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 19:09:50.586569       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 19:09:50.586589       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 19:09:50.586653       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 19:09:50.586718       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 19:09:50.586757       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0103 19:09:50.586860       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0103 19:09:50.586871       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 19:09:50.587046       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 19:09:51.410470       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 19:09:51.439923       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 19:09:51.465696       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 19:09:51.507006       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0103 19:09:51.532471       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 19:09:51.609100       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0103 19:09:54.381495       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 03 19:13:03 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:03.593044    1853 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 19:13:03 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:03.593077    1853 pod_workers.go:191] Error syncing pod 613fb24f-1cf8-4e8a-840b-56ae1cde3f92 ("kube-ingress-dns-minikube_kube-system(613fb24f-1cf8-4e8a-840b-56ae1cde3f92)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 03 19:13:17 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:17.592746    1853 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 19:13:17 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:17.592791    1853 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 19:13:17 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:17.592843    1853 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 19:13:17 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:17.592878    1853 pod_workers.go:191] Error syncing pod 613fb24f-1cf8-4e8a-840b-56ae1cde3f92 ("kube-ingress-dns-minikube_kube-system(613fb24f-1cf8-4e8a-840b-56ae1cde3f92)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 03 19:13:22 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:22.183678    1853 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 03 19:13:22 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:22.303060    1853 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hrqgn" (UniqueName: "kubernetes.io/secret/3b3739a7-f4e3-4374-9931-9fb3470d5181-default-token-hrqgn") pod "hello-world-app-5f5d8b66bb-4w4sz" (UID: "3b3739a7-f4e3-4374-9931-9fb3470d5181")
	Jan 03 19:13:22 ingress-addon-legacy-547465 kubelet[1853]: W0103 19:13:22.543221    1853 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/b8ff93692e8816d442112391a021bdcb874187fdac7a10e6facf54db1f78bb35/crio-6d9655075ec902cbcc6abd24c32335b4b5f779393db779f434cdd09391f31736 WatchSource:0}: Error finding container 6d9655075ec902cbcc6abd24c32335b4b5f779393db779f434cdd09391f31736: Status 404 returned error &{%!s(*http.body=&{0xc000afa4c0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Jan 03 19:13:28 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:28.592680    1853 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 19:13:28 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:28.592724    1853 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 19:13:28 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:28.592771    1853 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 19:13:28 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:28.592809    1853 pod_workers.go:191] Error syncing pod 613fb24f-1cf8-4e8a-840b-56ae1cde3f92 ("kube-ingress-dns-minikube_kube-system(613fb24f-1cf8-4e8a-840b-56ae1cde3f92)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 03 19:13:38 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:38.041854    1853 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-g8hdh" (UniqueName: "kubernetes.io/secret/613fb24f-1cf8-4e8a-840b-56ae1cde3f92-minikube-ingress-dns-token-g8hdh") pod "613fb24f-1cf8-4e8a-840b-56ae1cde3f92" (UID: "613fb24f-1cf8-4e8a-840b-56ae1cde3f92")
	Jan 03 19:13:38 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:38.043754    1853 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613fb24f-1cf8-4e8a-840b-56ae1cde3f92-minikube-ingress-dns-token-g8hdh" (OuterVolumeSpecName: "minikube-ingress-dns-token-g8hdh") pod "613fb24f-1cf8-4e8a-840b-56ae1cde3f92" (UID: "613fb24f-1cf8-4e8a-840b-56ae1cde3f92"). InnerVolumeSpecName "minikube-ingress-dns-token-g8hdh". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:13:38 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:38.142188    1853 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-g8hdh" (UniqueName: "kubernetes.io/secret/613fb24f-1cf8-4e8a-840b-56ae1cde3f92-minikube-ingress-dns-token-g8hdh") on node "ingress-addon-legacy-547465" DevicePath ""
	Jan 03 19:13:40 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:40.770092    1853 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-k8nst.17a6ec34db57b3ce", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-k8nst", UID:"50ab10d7-9887-4aba-8a85-6b98fcf59c06", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-547465"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a592dd1cbce, ext:227615732373, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a592dd1cbce, ext:227615732373, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-k8nst.17a6ec34db57b3ce" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 03 19:13:40 ingress-addon-legacy-547465 kubelet[1853]: E0103 19:13:40.774160    1853 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-k8nst.17a6ec34db57b3ce", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-k8nst", UID:"50ab10d7-9887-4aba-8a85-6b98fcf59c06", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-547465"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a592dd1cbce, ext:227615732373, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15d8a592df87927, ext:227618267118, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-k8nst.17a6ec34db57b3ce" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 03 19:13:43 ingress-addon-legacy-547465 kubelet[1853]: W0103 19:13:43.004042    1853 pod_container_deletor.go:77] Container "a292a85fba8af8706b2d518c94538fb7879eb8edfd26e7dc17377faf21f91148" not found in pod's containers
	Jan 03 19:13:44 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:44.884813    1853 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/50ab10d7-9887-4aba-8a85-6b98fcf59c06-webhook-cert") pod "50ab10d7-9887-4aba-8a85-6b98fcf59c06" (UID: "50ab10d7-9887-4aba-8a85-6b98fcf59c06")
	Jan 03 19:13:44 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:44.884876    1853 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-vdxp8" (UniqueName: "kubernetes.io/secret/50ab10d7-9887-4aba-8a85-6b98fcf59c06-ingress-nginx-token-vdxp8") pod "50ab10d7-9887-4aba-8a85-6b98fcf59c06" (UID: "50ab10d7-9887-4aba-8a85-6b98fcf59c06")
	Jan 03 19:13:44 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:44.887182    1853 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ab10d7-9887-4aba-8a85-6b98fcf59c06-ingress-nginx-token-vdxp8" (OuterVolumeSpecName: "ingress-nginx-token-vdxp8") pod "50ab10d7-9887-4aba-8a85-6b98fcf59c06" (UID: "50ab10d7-9887-4aba-8a85-6b98fcf59c06"). InnerVolumeSpecName "ingress-nginx-token-vdxp8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:13:44 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:44.887356    1853 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ab10d7-9887-4aba-8a85-6b98fcf59c06-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "50ab10d7-9887-4aba-8a85-6b98fcf59c06" (UID: "50ab10d7-9887-4aba-8a85-6b98fcf59c06"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 19:13:44 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:44.985170    1853 reconciler.go:319] Volume detached for volume "ingress-nginx-token-vdxp8" (UniqueName: "kubernetes.io/secret/50ab10d7-9887-4aba-8a85-6b98fcf59c06-ingress-nginx-token-vdxp8") on node "ingress-addon-legacy-547465" DevicePath ""
	Jan 03 19:13:44 ingress-addon-legacy-547465 kubelet[1853]: I0103 19:13:44.985209    1853 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/50ab10d7-9887-4aba-8a85-6b98fcf59c06-webhook-cert") on node "ingress-addon-legacy-547465" DevicePath ""
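The dominant kubelet error above is a short-name resolution failure: CRI-O rejects the unqualified reference cryptexlabs/minikube-ingress-dns:0.3.0 (plus its digest) because no unqualified-search registries are defined in /etc/containers/registries.conf. Two hedged fixes, sketches rather than the project's actual remedy: fully qualify the image as docker.io/cryptexlabs/minikube-ingress-dns:0.3.0 in the addon manifest, or define a search registry for the runtime on the node (the key is a top-level TOML setting, so it belongs near the head of the file rather than appended after any [[registry]] tables):

	# /etc/containers/registries.conf (top-level key; illustrative)
	unqualified-search-registries = ["docker.io"]

	# then restart the runtime so CRI-O rereads the file
	sudo systemctl restart crio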
	
	
	==> storage-provisioner [f8a6fe5e6b7938507b5fcf5267966dfb50b44d0b406356393556e2fc5c5182c8] <==
	I0103 19:10:15.176550       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 19:10:15.183964       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 19:10:15.184026       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 19:10:15.309732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 19:10:15.309792       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f89ed3a-88cf-4faf-b856-f374f058e79d", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-547465_a1831d24-90f7-409b-a1ea-49c3e96dc175 became leader
	I0103 19:10:15.309864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-547465_a1831d24-90f7-409b-a1ea-49c3e96dc175!
	I0103 19:10:15.410121       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-547465_a1831d24-90f7-409b-a1ea-49c3e96dc175!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-547465 -n ingress-addon-legacy-547465
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-547465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (187.51s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-8j67l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-8j67l -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-8j67l -- sh -c "ping -c 1 192.168.58.1": exit status 1 (170.277522ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-8j67l): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-nkg7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-nkg7x -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-nkg7x -- sh -c "ping -c 1 192.168.58.1": exit status 1 (182.873757ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-nkg7x): exit status 1
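Both probes fail identically: the PING header prints, then ping: permission denied (are you root?) lands on stderr, meaning busybox could not open an ICMP socket as an unprivileged user. Depending on the busybox build, the cure is either CAP_NET_RAW on the container or a node that permits unprivileged ICMP datagram sockets; both remedies below are illustrative sketches, not the test suite's actual fix:

	# on the node: let unprivileged processes open ICMP datagram sockets
	# (the sysctl name is real; the group range shown is an example)
	sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"

	# or, in the busybox pod spec: grant the raw-socket capability
	securityContext:
	  capabilities:
	    add: ["NET_RAW"]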
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-867906
helpers_test.go:235: (dbg) docker inspect multinode-867906:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4",
	        "Created": "2024-01-03T19:19:00.136921902Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 103453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T19:19:00.419581952Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b531f61d9bfacb74e25e3fb20a2533b78fa4bf98ca9061755006074ff8c2c789",
	        "ResolvConfPath": "/var/lib/docker/containers/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/hosts",
	        "LogPath": "/var/lib/docker/containers/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4-json.log",
	        "Name": "/multinode-867906",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-867906:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-867906",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/00235fa0f386573f8c025aa85bde725ffdbaac45c9d0d16ce8f0b37ce6e38a87-init/diff:/var/lib/docker/overlay2/a5364ccac14714ee0f769c339926d51ad0bbde3642ccbcf0e3661d2982bd002b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/00235fa0f386573f8c025aa85bde725ffdbaac45c9d0d16ce8f0b37ce6e38a87/merged",
	                "UpperDir": "/var/lib/docker/overlay2/00235fa0f386573f8c025aa85bde725ffdbaac45c9d0d16ce8f0b37ce6e38a87/diff",
	                "WorkDir": "/var/lib/docker/overlay2/00235fa0f386573f8c025aa85bde725ffdbaac45c9d0d16ce8f0b37ce6e38a87/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-867906",
	                "Source": "/var/lib/docker/volumes/multinode-867906/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-867906",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-867906",
	                "name.minikube.sigs.k8s.io": "multinode-867906",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2535803813d5ebf0c34facfd37539c745ebfa1b08b9e0982a3e85a58790c6d45",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2535803813d5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-867906": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "16b24a361d34",
	                        "multinode-867906"
	                    ],
	                    "NetworkID": "5f1630ec59163560381905fade2fd40faa76617659bf051deebaa82561873903",
	                    "EndpointID": "bb96a89d6d427a1769b9108f98d89a0a8cc15e127d3bfc0aeb081bbc7c8d9de2",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
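The inspect dump confirms the node container sits on the "multinode-867906" bridge network at 192.168.58.2 with gateway 192.168.58.1, i.e. exactly the address the pods were asked to ping, so host-side networking looks healthy. When only those fields matter, a --format query keeps the post-mortem short; a sketch using the field names from the JSON above:

	# Extract just the network placement instead of dumping the full JSON.
	docker inspect multinode-867906 \
	  --format '{{with index .NetworkSettings.Networks "multinode-867906"}}{{.IPAddress}} via {{.Gateway}}{{end}}'
	# per the JSON above, this prints: 192.168.58.2 via 192.168.58.1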
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-867906 -n multinode-867906
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-867906 logs -n 25: (1.297116441s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-668281                           | mount-start-2-668281 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-668281 ssh -- ls                    | mount-start-2-668281 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-657797                           | mount-start-1-657797 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-668281 ssh -- ls                    | mount-start-2-668281 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-668281                           | mount-start-2-668281 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	| start   | -p mount-start-2-668281                           | mount-start-2-668281 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	| ssh     | mount-start-2-668281 ssh -- ls                    | mount-start-2-668281 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-668281                           | mount-start-2-668281 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	| delete  | -p mount-start-1-657797                           | mount-start-1-657797 | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:18 UTC |
	| start   | -p multinode-867906                               | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:18 UTC | 03 Jan 24 19:20 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- apply -f                   | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- rollout                    | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- get pods -o                | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- get pods -o                | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-8j67l --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-nkg7x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-8j67l --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-nkg7x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-8j67l -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-nkg7x -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- get pods -o                | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-8j67l                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC |                     |
	|         | busybox-5bc68d56bd-8j67l -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC | 03 Jan 24 19:20 UTC |
	|         | busybox-5bc68d56bd-nkg7x                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-867906 -- exec                       | multinode-867906     | jenkins | v1.32.0 | 03 Jan 24 19:20 UTC |                     |
	|         | busybox-5bc68d56bd-nkg7x -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:18:54
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:18:54.046287  102835 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:18:54.046562  102835 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:18:54.046571  102835 out.go:309] Setting ErrFile to fd 2...
	I0103 19:18:54.046575  102835 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:18:54.046762  102835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:18:54.047331  102835 out.go:303] Setting JSON to false
	I0103 19:18:54.048665  102835 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3680,"bootTime":1704305854,"procs":835,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:18:54.048729  102835 start.go:138] virtualization: kvm guest
	I0103 19:18:54.051076  102835 out.go:177] * [multinode-867906] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:18:54.052751  102835 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:18:54.052755  102835 notify.go:220] Checking for updates...
	I0103 19:18:54.055620  102835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:18:54.057267  102835 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:18:54.058953  102835 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 19:18:54.060391  102835 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:18:54.061823  102835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:18:54.064612  102835 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:18:54.086308  102835 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:18:54.086407  102835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:18:54.136139  102835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-03 19:18:54.12772738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:18:54.136232  102835 docker.go:295] overlay module found
	I0103 19:18:54.138343  102835 out.go:177] * Using the docker driver based on user configuration
	I0103 19:18:54.139832  102835 start.go:298] selected driver: docker
	I0103 19:18:54.139859  102835 start.go:902] validating driver "docker" against <nil>
	I0103 19:18:54.139869  102835 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:18:54.140597  102835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:18:54.195993  102835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-03 19:18:54.187544202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:18:54.196202  102835 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:18:54.196526  102835 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:18:54.198668  102835 out.go:177] * Using Docker driver with root privileges
	I0103 19:18:54.200030  102835 cni.go:84] Creating CNI manager for ""
	I0103 19:18:54.200059  102835 cni.go:136] 0 nodes found, recommending kindnet
	I0103 19:18:54.200071  102835 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 19:18:54.200088  102835 start_flags.go:323] config:
	{Name:multinode-867906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-867906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:18:54.201916  102835 out.go:177] * Starting control plane node multinode-867906 in cluster multinode-867906
	I0103 19:18:54.203827  102835 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:18:54.205486  102835 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:18:54.206760  102835 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:18:54.206800  102835 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 19:18:54.206809  102835 cache.go:56] Caching tarball of preloaded images
	I0103 19:18:54.206846  102835 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:18:54.206873  102835 preload.go:174] Found /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:18:54.206883  102835 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:18:54.207210  102835 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/config.json ...
	I0103 19:18:54.207239  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/config.json: {Name:mk8286b94eb2316aa304cdd799e9dda215024023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:18:54.222796  102835 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 19:18:54.222818  102835 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 19:18:54.222834  102835 cache.go:194] Successfully downloaded all kic artifacts
	I0103 19:18:54.222875  102835 start.go:365] acquiring machines lock for multinode-867906: {Name:mkdeb674318004010c834287d71b45a4d231321a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:18:54.222972  102835 start.go:369] acquired machines lock for "multinode-867906" in 76.731µs
	I0103 19:18:54.223000  102835 start.go:93] Provisioning new machine with config: &{Name:multinode-867906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-867906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:18:54.223083  102835 start.go:125] createHost starting for "" (driver="docker")
	I0103 19:18:54.225312  102835 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0103 19:18:54.225516  102835 start.go:159] libmachine.API.Create for "multinode-867906" (driver="docker")
	I0103 19:18:54.225540  102835 client.go:168] LocalClient.Create starting
	I0103 19:18:54.225601  102835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem
	I0103 19:18:54.225631  102835 main.go:141] libmachine: Decoding PEM data...
	I0103 19:18:54.225646  102835 main.go:141] libmachine: Parsing certificate...
	I0103 19:18:54.225697  102835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem
	I0103 19:18:54.225714  102835 main.go:141] libmachine: Decoding PEM data...
	I0103 19:18:54.225723  102835 main.go:141] libmachine: Parsing certificate...
	I0103 19:18:54.226051  102835 cli_runner.go:164] Run: docker network inspect multinode-867906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 19:18:54.241727  102835 cli_runner.go:211] docker network inspect multinode-867906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 19:18:54.241815  102835 network_create.go:281] running [docker network inspect multinode-867906] to gather additional debugging logs...
	I0103 19:18:54.241844  102835 cli_runner.go:164] Run: docker network inspect multinode-867906
	W0103 19:18:54.257668  102835 cli_runner.go:211] docker network inspect multinode-867906 returned with exit code 1
	I0103 19:18:54.257699  102835 network_create.go:284] error running [docker network inspect multinode-867906]: docker network inspect multinode-867906: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-867906 not found
	I0103 19:18:54.257709  102835 network_create.go:286] output of [docker network inspect multinode-867906]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-867906 not found
	
	** /stderr **
	I0103 19:18:54.257822  102835 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:18:54.273372  102835 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-331b7f9d2466 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:21:5e:97:7b} reservation:<nil>}
	I0103 19:18:54.273776  102835 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000647a80}
	I0103 19:18:54.273799  102835 network_create.go:124] attempt to create docker network multinode-867906 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0103 19:18:54.273840  102835 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-867906 multinode-867906
	I0103 19:18:54.327715  102835 network_create.go:108] docker network multinode-867906 192.168.58.0/24 created
	I0103 19:18:54.327747  102835 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-867906" container
	I0103 19:18:54.327805  102835 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 19:18:54.343159  102835 cli_runner.go:164] Run: docker volume create multinode-867906 --label name.minikube.sigs.k8s.io=multinode-867906 --label created_by.minikube.sigs.k8s.io=true
	I0103 19:18:54.359841  102835 oci.go:103] Successfully created a docker volume multinode-867906
	I0103 19:18:54.359920  102835 cli_runner.go:164] Run: docker run --rm --name multinode-867906-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-867906 --entrypoint /usr/bin/test -v multinode-867906:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 19:18:54.899055  102835 oci.go:107] Successfully prepared a docker volume multinode-867906
	I0103 19:18:54.899101  102835 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:18:54.899122  102835 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 19:18:54.899176  102835 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-867906:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 19:19:00.071827  102835 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-867906:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.172592323s)
	I0103 19:19:00.071859  102835 kic.go:203] duration metric: took 5.172735 seconds to extract preloaded images to volume
	W0103 19:19:00.071992  102835 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 19:19:00.072079  102835 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 19:19:00.122955  102835 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-867906 --name multinode-867906 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-867906 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-867906 --network multinode-867906 --ip 192.168.58.2 --volume multinode-867906:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 19:19:00.427389  102835 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Running}}
	I0103 19:19:00.444660  102835 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:19:00.461942  102835 cli_runner.go:164] Run: docker exec multinode-867906 stat /var/lib/dpkg/alternatives/iptables
	I0103 19:19:00.532573  102835 oci.go:144] the created container "multinode-867906" has a running status.
	I0103 19:19:00.532606  102835 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa...
	I0103 19:19:00.729115  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 19:19:00.729158  102835 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 19:19:00.748780  102835 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:19:00.768446  102835 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 19:19:00.768465  102835 kic_runner.go:114] Args: [docker exec --privileged multinode-867906 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 19:19:00.824117  102835 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:19:00.850764  102835 machine.go:88] provisioning docker machine ...
	I0103 19:19:00.850797  102835 ubuntu.go:169] provisioning hostname "multinode-867906"
	I0103 19:19:00.850850  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:00.868569  102835 main.go:141] libmachine: Using SSH client type: native
	I0103 19:19:00.868957  102835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0103 19:19:00.868978  102835 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-867906 && echo "multinode-867906" | sudo tee /etc/hostname
	I0103 19:19:00.869569  102835 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56612->127.0.0.1:32847: read: connection reset by peer
	I0103 19:19:04.000131  102835 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-867906
	
	I0103 19:19:04.000202  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:04.016464  102835 main.go:141] libmachine: Using SSH client type: native
	I0103 19:19:04.016796  102835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0103 19:19:04.016814  102835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-867906' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-867906/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-867906' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:19:04.134224  102835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:19:04.134255  102835 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-8915/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-8915/.minikube}
	I0103 19:19:04.134278  102835 ubuntu.go:177] setting up certificates
	I0103 19:19:04.134287  102835 provision.go:83] configureAuth start
	I0103 19:19:04.134344  102835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906
	I0103 19:19:04.150202  102835 provision.go:138] copyHostCerts
	I0103 19:19:04.150246  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:19:04.150281  102835 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem, removing ...
	I0103 19:19:04.150295  102835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:19:04.150374  102835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem (1078 bytes)
	I0103 19:19:04.150466  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:19:04.150492  102835 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem, removing ...
	I0103 19:19:04.150502  102835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:19:04.150540  102835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem (1123 bytes)
	I0103 19:19:04.150598  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:19:04.150625  102835 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem, removing ...
	I0103 19:19:04.150634  102835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:19:04.150669  102835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem (1679 bytes)
	I0103 19:19:04.150732  102835 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem org=jenkins.multinode-867906 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-867906]
	I0103 19:19:04.362714  102835 provision.go:172] copyRemoteCerts
	I0103 19:19:04.362780  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:19:04.362817  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:04.379065  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:19:04.466154  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:19:04.466223  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:19:04.487463  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:19:04.487513  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0103 19:19:04.508488  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:19:04.508545  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:19:04.529125  102835 provision.go:86] duration metric: configureAuth took 394.827512ms
	I0103 19:19:04.529153  102835 ubuntu.go:193] setting minikube options for container-runtime
	I0103 19:19:04.529310  102835 config.go:182] Loaded profile config "multinode-867906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:19:04.529395  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:04.546083  102835 main.go:141] libmachine: Using SSH client type: native
	I0103 19:19:04.546543  102835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0103 19:19:04.546565  102835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:19:04.752588  102835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:19:04.752613  102835 machine.go:91] provisioned docker machine in 3.901828395s
	I0103 19:19:04.752624  102835 client.go:171] LocalClient.Create took 10.527074635s
	I0103 19:19:04.752639  102835 start.go:167] duration metric: libmachine.API.Create for "multinode-867906" took 10.527122192s
	I0103 19:19:04.752648  102835 start.go:300] post-start starting for "multinode-867906" (driver="docker")
	I0103 19:19:04.752660  102835 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:19:04.752737  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:19:04.752782  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:04.768989  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:19:04.854859  102835 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:19:04.857848  102835 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0103 19:19:04.857873  102835 command_runner.go:130] > NAME="Ubuntu"
	I0103 19:19:04.857880  102835 command_runner.go:130] > VERSION_ID="22.04"
	I0103 19:19:04.857885  102835 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0103 19:19:04.857890  102835 command_runner.go:130] > VERSION_CODENAME=jammy
	I0103 19:19:04.857896  102835 command_runner.go:130] > ID=ubuntu
	I0103 19:19:04.857902  102835 command_runner.go:130] > ID_LIKE=debian
	I0103 19:19:04.857909  102835 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0103 19:19:04.857921  102835 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0103 19:19:04.857932  102835 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0103 19:19:04.857975  102835 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0103 19:19:04.857987  102835 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0103 19:19:04.858046  102835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 19:19:04.858081  102835 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 19:19:04.858095  102835 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 19:19:04.858104  102835 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 19:19:04.858120  102835 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/addons for local assets ...
	I0103 19:19:04.858230  102835 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/files for local assets ...
	I0103 19:19:04.858353  102835 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> 156702.pem in /etc/ssl/certs
	I0103 19:19:04.858380  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> /etc/ssl/certs/156702.pem
	I0103 19:19:04.858496  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:19:04.866389  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /etc/ssl/certs/156702.pem (1708 bytes)
	I0103 19:19:04.888208  102835 start.go:303] post-start completed in 135.547001ms
	I0103 19:19:04.888580  102835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906
	I0103 19:19:04.904537  102835 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/config.json ...
	I0103 19:19:04.904820  102835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:19:04.904872  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:04.921170  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:19:05.006466  102835 command_runner.go:130] > 25%
	I0103 19:19:05.006673  102835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 19:19:05.010374  102835 command_runner.go:130] > 220G
	I0103 19:19:05.010572  102835 start.go:128] duration metric: createHost completed in 10.787473449s
	I0103 19:19:05.010592  102835 start.go:83] releasing machines lock for "multinode-867906", held for 10.787606144s
	I0103 19:19:05.010650  102835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906
	I0103 19:19:05.026803  102835 ssh_runner.go:195] Run: cat /version.json
	I0103 19:19:05.026847  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:05.026886  102835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:19:05.026960  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:05.043338  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:19:05.043638  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:19:05.125908  102835 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703498848-17857", "minikube_version": "v1.32.0", "commit": "d18dc8d014b22564d2860ddb02a821a21df70433"}
	I0103 19:19:05.126018  102835 ssh_runner.go:195] Run: systemctl --version
	I0103 19:19:05.207894  102835 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 19:19:05.210106  102835 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0103 19:19:05.210158  102835 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0103 19:19:05.210225  102835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:19:05.345814  102835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:19:05.349726  102835 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0103 19:19:05.349750  102835 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0103 19:19:05.349757  102835 command_runner.go:130] > Device: 34h/52d	Inode: 577599      Links: 1
	I0103 19:19:05.349765  102835 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:19:05.349773  102835 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0103 19:19:05.349782  102835 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0103 19:19:05.349791  102835 command_runner.go:130] > Change: 2024-01-03 18:59:21.794240194 +0000
	I0103 19:19:05.349800  102835 command_runner.go:130] >  Birth: 2024-01-03 18:59:21.794240194 +0000
	I0103 19:19:05.349930  102835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:19:05.367621  102835 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 19:19:05.367719  102835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:19:05.394214  102835 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0103 19:19:05.394260  102835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
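	Note that minikube parks conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they remain recoverable. A minimal sketch of the reverse operation (illustrative only, not a minikube command):

		# re-enable any CNI configs that were parked with the .mk_disabled suffix
		for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done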
	I0103 19:19:05.394270  102835 start.go:475] detecting cgroup driver to use...
	I0103 19:19:05.394306  102835 detect.go:196] detected "cgroupfs" cgroup driver on host os
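	The "cgroupfs" result feeds the cgroup_manager setting applied to CRI-O below. To check the same thing by hand (an illustrative probe, not the exact logic in detect.go):

		# cgroup2fs means the unified (v2) hierarchy; tmpfs means cgroup v1
		stat -fc %T /sys/fs/cgroup
		# Docker reports the cgroup driver it is configured with
		docker info --format '{{.CgroupDriver}}'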
	I0103 19:19:05.394361  102835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:19:05.407689  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:19:05.417296  102835 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:19:05.417351  102835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:19:05.428895  102835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:19:05.440900  102835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:19:05.514700  102835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:19:05.595748  102835 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0103 19:19:05.595792  102835 docker.go:219] disabling docker service ...
	I0103 19:19:05.595849  102835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:19:05.612660  102835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:19:05.622677  102835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:19:05.633157  102835 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0103 19:19:05.702761  102835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:19:05.779792  102835 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0103 19:19:05.779862  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:19:05.790367  102835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:19:05.805659  102835 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 19:19:05.805712  102835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:19:05.805756  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:19:05.815176  102835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:19:05.815238  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:19:05.824252  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:19:05.832866  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:19:05.841734  102835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:19:05.849908  102835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:19:05.856438  102835 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0103 19:19:05.857139  102835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:19:05.864566  102835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:19:05.933712  102835 ssh_runner.go:195] Run: sudo systemctl restart crio
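	Taken together, the steps since the crictl.yaml write are minikube's standard CRI-O preparation: point crictl at the CRI-O socket, pin the pause image, align the cgroup manager with the detected host driver, then reload units and restart the runtime. Condensed into one sketch (the same commands the log runs, assuming the stock /etc/crio/crio.conf.d/02-crio.conf drop-in used here):

		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo systemctl daemon-reload && sudo systemctl restart crio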
	I0103 19:19:06.035411  102835 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:19:06.035497  102835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:19:06.039157  102835 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 19:19:06.039179  102835 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 19:19:06.039186  102835 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I0103 19:19:06.039193  102835 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:19:06.039199  102835 command_runner.go:130] > Access: 2024-01-03 19:19:06.018733928 +0000
	I0103 19:19:06.039205  102835 command_runner.go:130] > Modify: 2024-01-03 19:19:06.018733928 +0000
	I0103 19:19:06.039209  102835 command_runner.go:130] > Change: 2024-01-03 19:19:06.022734227 +0000
	I0103 19:19:06.039213  102835 command_runner.go:130] >  Birth: -
	I0103 19:19:06.039237  102835 start.go:543] Will wait 60s for crictl version
	I0103 19:19:06.039306  102835 ssh_runner.go:195] Run: which crictl
	I0103 19:19:06.042228  102835 command_runner.go:130] > /usr/bin/crictl
	I0103 19:19:06.042392  102835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:19:06.072986  102835 command_runner.go:130] > Version:  0.1.0
	I0103 19:19:06.073009  102835 command_runner.go:130] > RuntimeName:  cri-o
	I0103 19:19:06.073014  102835 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0103 19:19:06.073019  102835 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 19:19:06.075131  102835 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 19:19:06.075213  102835 ssh_runner.go:195] Run: crio --version
	I0103 19:19:06.106645  102835 command_runner.go:130] > crio version 1.24.6
	I0103 19:19:06.106667  102835 command_runner.go:130] > Version:          1.24.6
	I0103 19:19:06.106676  102835 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 19:19:06.106682  102835 command_runner.go:130] > GitTreeState:     clean
	I0103 19:19:06.106690  102835 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 19:19:06.106697  102835 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 19:19:06.106703  102835 command_runner.go:130] > Compiler:         gc
	I0103 19:19:06.106711  102835 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:19:06.106727  102835 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:19:06.106745  102835 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:19:06.106756  102835 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:19:06.106767  102835 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:19:06.108630  102835 ssh_runner.go:195] Run: crio --version
	I0103 19:19:06.139139  102835 command_runner.go:130] > crio version 1.24.6
	I0103 19:19:06.139158  102835 command_runner.go:130] > Version:          1.24.6
	I0103 19:19:06.139166  102835 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 19:19:06.139170  102835 command_runner.go:130] > GitTreeState:     clean
	I0103 19:19:06.139176  102835 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 19:19:06.139181  102835 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 19:19:06.139188  102835 command_runner.go:130] > Compiler:         gc
	I0103 19:19:06.139192  102835 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:19:06.139197  102835 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:19:06.139205  102835 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:19:06.139212  102835 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:19:06.139216  102835 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:19:06.142867  102835 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 19:19:06.144268  102835 cli_runner.go:164] Run: docker network inspect multinode-867906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:19:06.161398  102835 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0103 19:19:06.164791  102835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
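	That one-liner is an idempotent hosts-file update: strip any existing host.minikube.internal entry, append the current gateway address, and copy the result back into place with sudo. Expanded for readability (the same command as the log line above):

		{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
		  echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts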
	I0103 19:19:06.174247  102835 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:19:06.174296  102835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:19:06.227204  102835 command_runner.go:130] > {
	I0103 19:19:06.227233  102835 command_runner.go:130] >   "images": [
	I0103 19:19:06.227240  102835 command_runner.go:130] >     {
	I0103 19:19:06.227253  102835 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0103 19:19:06.227261  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227274  102835 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0103 19:19:06.227284  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227292  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227309  102835 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0103 19:19:06.227323  102835 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0103 19:19:06.227329  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227334  102835 command_runner.go:130] >       "size": "65258016",
	I0103 19:19:06.227340  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.227345  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.227352  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.227357  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.227363  102835 command_runner.go:130] >     },
	I0103 19:19:06.227366  102835 command_runner.go:130] >     {
	I0103 19:19:06.227372  102835 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0103 19:19:06.227378  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227384  102835 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0103 19:19:06.227388  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227394  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227407  102835 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0103 19:19:06.227417  102835 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0103 19:19:06.227424  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227430  102835 command_runner.go:130] >       "size": "31470524",
	I0103 19:19:06.227436  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.227440  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.227447  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.227451  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.227457  102835 command_runner.go:130] >     },
	I0103 19:19:06.227461  102835 command_runner.go:130] >     {
	I0103 19:19:06.227469  102835 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0103 19:19:06.227474  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227479  102835 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0103 19:19:06.227483  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227488  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227497  102835 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0103 19:19:06.227505  102835 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0103 19:19:06.227511  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227517  102835 command_runner.go:130] >       "size": "53621675",
	I0103 19:19:06.227523  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.227528  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.227543  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.227550  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.227553  102835 command_runner.go:130] >     },
	I0103 19:19:06.227557  102835 command_runner.go:130] >     {
	I0103 19:19:06.227563  102835 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0103 19:19:06.227569  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227575  102835 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0103 19:19:06.227578  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227584  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227591  102835 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0103 19:19:06.227600  102835 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0103 19:19:06.227609  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227616  102835 command_runner.go:130] >       "size": "295456551",
	I0103 19:19:06.227620  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.227625  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.227630  102835 command_runner.go:130] >       },
	I0103 19:19:06.227637  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.227641  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.227648  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.227652  102835 command_runner.go:130] >     },
	I0103 19:19:06.227657  102835 command_runner.go:130] >     {
	I0103 19:19:06.227663  102835 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0103 19:19:06.227670  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227675  102835 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0103 19:19:06.227679  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227685  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227692  102835 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0103 19:19:06.227702  102835 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0103 19:19:06.227706  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227710  102835 command_runner.go:130] >       "size": "127226832",
	I0103 19:19:06.227715  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.227720  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.227733  102835 command_runner.go:130] >       },
	I0103 19:19:06.227741  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.227746  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.227752  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.227756  102835 command_runner.go:130] >     },
	I0103 19:19:06.227762  102835 command_runner.go:130] >     {
	I0103 19:19:06.227771  102835 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0103 19:19:06.227777  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227782  102835 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0103 19:19:06.227789  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227793  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227803  102835 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0103 19:19:06.227811  102835 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0103 19:19:06.227816  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227821  102835 command_runner.go:130] >       "size": "123261750",
	I0103 19:19:06.227827  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.227831  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.227837  102835 command_runner.go:130] >       },
	I0103 19:19:06.227841  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.227848  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.227854  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.227857  102835 command_runner.go:130] >     },
	I0103 19:19:06.227861  102835 command_runner.go:130] >     {
	I0103 19:19:06.227867  102835 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0103 19:19:06.227871  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227877  102835 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0103 19:19:06.227882  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227886  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227894  102835 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0103 19:19:06.227903  102835 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0103 19:19:06.227906  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227911  102835 command_runner.go:130] >       "size": "74749335",
	I0103 19:19:06.227916  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.227921  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.227927  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.227932  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.227937  102835 command_runner.go:130] >     },
	I0103 19:19:06.227942  102835 command_runner.go:130] >     {
	I0103 19:19:06.227950  102835 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0103 19:19:06.227954  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.227959  102835 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0103 19:19:06.227965  102835 command_runner.go:130] >       ],
	I0103 19:19:06.227969  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.227989  102835 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0103 19:19:06.227998  102835 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0103 19:19:06.228002  102835 command_runner.go:130] >       ],
	I0103 19:19:06.228007  102835 command_runner.go:130] >       "size": "61551410",
	I0103 19:19:06.228013  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.228017  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.228021  102835 command_runner.go:130] >       },
	I0103 19:19:06.228025  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.228031  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.228035  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.228041  102835 command_runner.go:130] >     },
	I0103 19:19:06.228044  102835 command_runner.go:130] >     {
	I0103 19:19:06.228054  102835 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0103 19:19:06.228061  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.228065  102835 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0103 19:19:06.228071  102835 command_runner.go:130] >       ],
	I0103 19:19:06.228075  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.228085  102835 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0103 19:19:06.228092  102835 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0103 19:19:06.228098  102835 command_runner.go:130] >       ],
	I0103 19:19:06.228102  102835 command_runner.go:130] >       "size": "750414",
	I0103 19:19:06.228108  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.228113  102835 command_runner.go:130] >         "value": "65535"
	I0103 19:19:06.228119  102835 command_runner.go:130] >       },
	I0103 19:19:06.228123  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.228131  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.228137  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.228140  102835 command_runner.go:130] >     }
	I0103 19:19:06.228144  102835 command_runner.go:130] >   ]
	I0103 19:19:06.228147  102835 command_runner.go:130] > }
	I0103 19:19:06.228319  102835 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:19:06.228333  102835 crio.go:415] Images already preloaded, skipping extraction
	I0103 19:19:06.228374  102835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:19:06.258834  102835 command_runner.go:130] > {
	I0103 19:19:06.258854  102835 command_runner.go:130] >   "images": [
	I0103 19:19:06.258858  102835 command_runner.go:130] >     {
	I0103 19:19:06.258866  102835 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0103 19:19:06.258871  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.258877  102835 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0103 19:19:06.258881  102835 command_runner.go:130] >       ],
	I0103 19:19:06.258887  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.258900  102835 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0103 19:19:06.258913  102835 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0103 19:19:06.258923  102835 command_runner.go:130] >       ],
	I0103 19:19:06.258934  102835 command_runner.go:130] >       "size": "65258016",
	I0103 19:19:06.258940  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.258945  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.258958  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.258965  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.258969  102835 command_runner.go:130] >     },
	I0103 19:19:06.258975  102835 command_runner.go:130] >     {
	I0103 19:19:06.258981  102835 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0103 19:19:06.258985  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.258996  102835 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0103 19:19:06.259003  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259010  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.259023  102835 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0103 19:19:06.259032  102835 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0103 19:19:06.259035  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259043  102835 command_runner.go:130] >       "size": "31470524",
	I0103 19:19:06.259047  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.259051  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.259059  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.259064  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.259067  102835 command_runner.go:130] >     },
	I0103 19:19:06.259070  102835 command_runner.go:130] >     {
	I0103 19:19:06.259077  102835 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0103 19:19:06.259088  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.259101  102835 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0103 19:19:06.259111  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259121  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.259137  102835 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0103 19:19:06.259147  102835 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0103 19:19:06.259153  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259157  102835 command_runner.go:130] >       "size": "53621675",
	I0103 19:19:06.259165  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.259172  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.259178  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.259188  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.259198  102835 command_runner.go:130] >     },
	I0103 19:19:06.259207  102835 command_runner.go:130] >     {
	I0103 19:19:06.259221  102835 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0103 19:19:06.259231  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.259242  102835 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0103 19:19:06.259251  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259259  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.259266  102835 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0103 19:19:06.259276  102835 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0103 19:19:06.259294  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259310  102835 command_runner.go:130] >       "size": "295456551",
	I0103 19:19:06.259317  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.259324  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.259334  102835 command_runner.go:130] >       },
	I0103 19:19:06.259344  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.259354  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.259364  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.259372  102835 command_runner.go:130] >     },
	I0103 19:19:06.259381  102835 command_runner.go:130] >     {
	I0103 19:19:06.259397  102835 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0103 19:19:06.259407  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.259419  102835 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0103 19:19:06.259429  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259439  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.259454  102835 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0103 19:19:06.259470  102835 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0103 19:19:06.259485  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259492  102835 command_runner.go:130] >       "size": "127226832",
	I0103 19:19:06.259498  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.259509  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.259518  102835 command_runner.go:130] >       },
	I0103 19:19:06.259526  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.259536  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.259546  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.259555  102835 command_runner.go:130] >     },
	I0103 19:19:06.259564  102835 command_runner.go:130] >     {
	I0103 19:19:06.259577  102835 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0103 19:19:06.259588  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.259598  102835 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0103 19:19:06.259608  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259619  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.259632  102835 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0103 19:19:06.259647  102835 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0103 19:19:06.259656  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259667  102835 command_runner.go:130] >       "size": "123261750",
	I0103 19:19:06.259676  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.259685  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.259691  102835 command_runner.go:130] >       },
	I0103 19:19:06.259697  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.259707  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.259718  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.259725  102835 command_runner.go:130] >     },
	I0103 19:19:06.259734  102835 command_runner.go:130] >     {
	I0103 19:19:06.259747  102835 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0103 19:19:06.259756  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.259772  102835 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0103 19:19:06.259780  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259788  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.259797  102835 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0103 19:19:06.259813  102835 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0103 19:19:06.259822  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259830  102835 command_runner.go:130] >       "size": "74749335",
	I0103 19:19:06.259840  102835 command_runner.go:130] >       "uid": null,
	I0103 19:19:06.259850  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.259859  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.259869  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.259877  102835 command_runner.go:130] >     },
	I0103 19:19:06.259886  102835 command_runner.go:130] >     {
	I0103 19:19:06.259896  102835 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0103 19:19:06.259906  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.259917  102835 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0103 19:19:06.259927  102835 command_runner.go:130] >       ],
	I0103 19:19:06.259936  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.259969  102835 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0103 19:19:06.259985  102835 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0103 19:19:06.259992  102835 command_runner.go:130] >       ],
	I0103 19:19:06.260002  102835 command_runner.go:130] >       "size": "61551410",
	I0103 19:19:06.260012  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.260020  102835 command_runner.go:130] >         "value": "0"
	I0103 19:19:06.260029  102835 command_runner.go:130] >       },
	I0103 19:19:06.260039  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.260049  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.260059  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.260068  102835 command_runner.go:130] >     },
	I0103 19:19:06.260076  102835 command_runner.go:130] >     {
	I0103 19:19:06.260091  102835 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0103 19:19:06.260102  102835 command_runner.go:130] >       "repoTags": [
	I0103 19:19:06.260113  102835 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0103 19:19:06.260123  102835 command_runner.go:130] >       ],
	I0103 19:19:06.260133  102835 command_runner.go:130] >       "repoDigests": [
	I0103 19:19:06.260147  102835 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0103 19:19:06.260173  102835 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0103 19:19:06.260180  102835 command_runner.go:130] >       ],
	I0103 19:19:06.260185  102835 command_runner.go:130] >       "size": "750414",
	I0103 19:19:06.260194  102835 command_runner.go:130] >       "uid": {
	I0103 19:19:06.260204  102835 command_runner.go:130] >         "value": "65535"
	I0103 19:19:06.260214  102835 command_runner.go:130] >       },
	I0103 19:19:06.260221  102835 command_runner.go:130] >       "username": "",
	I0103 19:19:06.260231  102835 command_runner.go:130] >       "spec": null,
	I0103 19:19:06.260241  102835 command_runner.go:130] >       "pinned": false
	I0103 19:19:06.260247  102835 command_runner.go:130] >     }
	I0103 19:19:06.260256  102835 command_runner.go:130] >   ]
	I0103 19:19:06.260269  102835 command_runner.go:130] > }
	I0103 19:19:06.260403  102835 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:19:06.260415  102835 cache_images.go:84] Images are preloaded, skipping loading
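	Both crictl invocations return the same image set, which is why the preload check passes without pulling anything. To inspect the same JSON by hand, one hypothetical one-liner (assuming jq is available on the node) is:

		sudo crictl images --output json | jq -r '.images[].repoTags[]'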
	I0103 19:19:06.260486  102835 ssh_runner.go:195] Run: crio config
	I0103 19:19:06.298277  102835 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 19:19:06.298309  102835 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 19:19:06.298326  102835 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 19:19:06.298329  102835 command_runner.go:130] > #
	I0103 19:19:06.298336  102835 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 19:19:06.298347  102835 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 19:19:06.298357  102835 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 19:19:06.298373  102835 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 19:19:06.298382  102835 command_runner.go:130] > # reload'.
	I0103 19:19:06.298393  102835 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 19:19:06.298406  102835 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 19:19:06.298419  102835 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 19:19:06.298430  102835 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 19:19:06.298439  102835 command_runner.go:130] > [crio]
	I0103 19:19:06.298450  102835 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 19:19:06.298462  102835 command_runner.go:130] > # containers images, in this directory.
	I0103 19:19:06.298476  102835 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0103 19:19:06.298489  102835 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 19:19:06.298501  102835 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0103 19:19:06.298511  102835 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 19:19:06.298529  102835 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 19:19:06.298542  102835 command_runner.go:130] > # storage_driver = "vfs"
	I0103 19:19:06.298552  102835 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 19:19:06.298570  102835 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 19:19:06.298580  102835 command_runner.go:130] > # storage_option = [
	I0103 19:19:06.298586  102835 command_runner.go:130] > # ]
	I0103 19:19:06.298599  102835 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 19:19:06.298613  102835 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 19:19:06.298621  102835 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 19:19:06.298629  102835 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 19:19:06.298642  102835 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 19:19:06.298650  102835 command_runner.go:130] > # always happen on a node reboot
	I0103 19:19:06.298663  102835 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 19:19:06.298672  102835 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 19:19:06.298685  102835 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 19:19:06.298706  102835 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 19:19:06.298723  102835 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 19:19:06.298736  102835 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 19:19:06.298753  102835 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 19:19:06.298761  102835 command_runner.go:130] > # internal_wipe = true
	I0103 19:19:06.298771  102835 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 19:19:06.298784  102835 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 19:19:06.298797  102835 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 19:19:06.298809  102835 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 19:19:06.298823  102835 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 19:19:06.298833  102835 command_runner.go:130] > [crio.api]
	I0103 19:19:06.298844  102835 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 19:19:06.298855  102835 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 19:19:06.298867  102835 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 19:19:06.298879  102835 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 19:19:06.298893  102835 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 19:19:06.298900  102835 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 19:19:06.298911  102835 command_runner.go:130] > # stream_port = "0"
	I0103 19:19:06.298920  102835 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 19:19:06.298930  102835 command_runner.go:130] > # stream_enable_tls = false
	I0103 19:19:06.298943  102835 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 19:19:06.298958  102835 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 19:19:06.298970  102835 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 19:19:06.298984  102835 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 19:19:06.298994  102835 command_runner.go:130] > # minutes.
	I0103 19:19:06.299002  102835 command_runner.go:130] > # stream_tls_cert = ""
	I0103 19:19:06.299012  102835 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 19:19:06.299025  102835 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 19:19:06.299035  102835 command_runner.go:130] > # stream_tls_key = ""
	I0103 19:19:06.299045  102835 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 19:19:06.299058  102835 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 19:19:06.299067  102835 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 19:19:06.299077  102835 command_runner.go:130] > # stream_tls_ca = ""
	I0103 19:19:06.299089  102835 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:19:06.299100  102835 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0103 19:19:06.299112  102835 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:19:06.299123  102835 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0103 19:19:06.299162  102835 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 19:19:06.299179  102835 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 19:19:06.299192  102835 command_runner.go:130] > [crio.runtime]
	I0103 19:19:06.299206  102835 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 19:19:06.299215  102835 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 19:19:06.299227  102835 command_runner.go:130] > # "nofile=1024:2048"
	I0103 19:19:06.299237  102835 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 19:19:06.299244  102835 command_runner.go:130] > # default_ulimits = [
	I0103 19:19:06.299251  102835 command_runner.go:130] > # ]
	I0103 19:19:06.299261  102835 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 19:19:06.299268  102835 command_runner.go:130] > # no_pivot = false
	I0103 19:19:06.299277  102835 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 19:19:06.299286  102835 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 19:19:06.299292  102835 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 19:19:06.299301  102835 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 19:19:06.299309  102835 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 19:19:06.299323  102835 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:19:06.299330  102835 command_runner.go:130] > # conmon = ""
	I0103 19:19:06.299337  102835 command_runner.go:130] > # Cgroup setting for conmon
	I0103 19:19:06.299349  102835 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 19:19:06.299360  102835 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 19:19:06.299370  102835 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 19:19:06.299384  102835 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 19:19:06.299396  102835 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:19:06.299403  102835 command_runner.go:130] > # conmon_env = [
	I0103 19:19:06.299409  102835 command_runner.go:130] > # ]
	I0103 19:19:06.299417  102835 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 19:19:06.299427  102835 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 19:19:06.299436  102835 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 19:19:06.299443  102835 command_runner.go:130] > # default_env = [
	I0103 19:19:06.299449  102835 command_runner.go:130] > # ]
	I0103 19:19:06.299463  102835 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 19:19:06.299470  102835 command_runner.go:130] > # selinux = false
	I0103 19:19:06.299480  102835 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 19:19:06.299491  102835 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 19:19:06.299500  102835 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 19:19:06.299510  102835 command_runner.go:130] > # seccomp_profile = ""
	I0103 19:19:06.299528  102835 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 19:19:06.299540  102835 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 19:19:06.299550  102835 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 19:19:06.299556  102835 command_runner.go:130] > # which might increase security.
	I0103 19:19:06.299567  102835 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0103 19:19:06.299574  102835 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 19:19:06.299580  102835 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 19:19:06.299588  102835 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 19:19:06.299594  102835 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 19:19:06.299598  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:19:06.299603  102835 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 19:19:06.299608  102835 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 19:19:06.299612  102835 command_runner.go:130] > # the cgroup blockio controller.
	I0103 19:19:06.299616  102835 command_runner.go:130] > # blockio_config_file = ""
	I0103 19:19:06.299622  102835 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 19:19:06.299626  102835 command_runner.go:130] > # irqbalance daemon.
	I0103 19:19:06.299631  102835 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 19:19:06.299638  102835 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 19:19:06.299646  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:19:06.299655  102835 command_runner.go:130] > # rdt_config_file = ""
	I0103 19:19:06.299664  102835 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 19:19:06.299670  102835 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 19:19:06.299682  102835 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 19:19:06.299689  102835 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 19:19:06.299701  102835 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 19:19:06.299710  102835 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 19:19:06.299719  102835 command_runner.go:130] > # will be added.
	I0103 19:19:06.299727  102835 command_runner.go:130] > # default_capabilities = [
	I0103 19:19:06.299734  102835 command_runner.go:130] > # 	"CHOWN",
	I0103 19:19:06.299741  102835 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 19:19:06.299752  102835 command_runner.go:130] > # 	"FSETID",
	I0103 19:19:06.299759  102835 command_runner.go:130] > # 	"FOWNER",
	I0103 19:19:06.299766  102835 command_runner.go:130] > # 	"SETGID",
	I0103 19:19:06.299772  102835 command_runner.go:130] > # 	"SETUID",
	I0103 19:19:06.299779  102835 command_runner.go:130] > # 	"SETPCAP",
	I0103 19:19:06.299784  102835 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 19:19:06.299789  102835 command_runner.go:130] > # 	"KILL",
	I0103 19:19:06.299803  102835 command_runner.go:130] > # ]
	I0103 19:19:06.299813  102835 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0103 19:19:06.299821  102835 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0103 19:19:06.299827  102835 command_runner.go:130] > # add_inheritable_capabilities = true
	I0103 19:19:06.299835  102835 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 19:19:06.299842  102835 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:19:06.299847  102835 command_runner.go:130] > # default_sysctls = [
	I0103 19:19:06.299851  102835 command_runner.go:130] > # ]
	I0103 19:19:06.299857  102835 command_runner.go:130] > # List of devices on the host that a
	I0103 19:19:06.299864  102835 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 19:19:06.299869  102835 command_runner.go:130] > # allowed_devices = [
	I0103 19:19:06.299874  102835 command_runner.go:130] > # 	"/dev/fuse",
	I0103 19:19:06.299878  102835 command_runner.go:130] > # ]
	I0103 19:19:06.299883  102835 command_runner.go:130] > # List of additional devices, specified as
	I0103 19:19:06.299937  102835 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 19:19:06.299948  102835 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 19:19:06.299958  102835 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:19:06.299966  102835 command_runner.go:130] > # additional_devices = [
	I0103 19:19:06.299975  102835 command_runner.go:130] > # ]
	I0103 19:19:06.299984  102835 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 19:19:06.299991  102835 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 19:19:06.299996  102835 command_runner.go:130] > # 	"/etc/cdi",
	I0103 19:19:06.300002  102835 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 19:19:06.300006  102835 command_runner.go:130] > # ]
	I0103 19:19:06.300012  102835 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 19:19:06.300018  102835 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 19:19:06.300022  102835 command_runner.go:130] > # Defaults to false.
	I0103 19:19:06.300027  102835 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 19:19:06.300033  102835 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 19:19:06.300038  102835 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 19:19:06.300042  102835 command_runner.go:130] > # hooks_dir = [
	I0103 19:19:06.300046  102835 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 19:19:06.300050  102835 command_runner.go:130] > # ]
	I0103 19:19:06.300055  102835 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 19:19:06.300061  102835 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 19:19:06.300066  102835 command_runner.go:130] > # its default mounts from the following two files:
	I0103 19:19:06.300071  102835 command_runner.go:130] > #
	I0103 19:19:06.300077  102835 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 19:19:06.300083  102835 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 19:19:06.300088  102835 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 19:19:06.300093  102835 command_runner.go:130] > #
	I0103 19:19:06.300099  102835 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 19:19:06.300104  102835 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 19:19:06.300111  102835 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 19:19:06.300115  102835 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 19:19:06.300118  102835 command_runner.go:130] > #
	I0103 19:19:06.300122  102835 command_runner.go:130] > # default_mounts_file = ""
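	As a concrete illustration of the /SRC:/DST format described above, a minimal sketch of adding one entry to the override file from point 1 (the zoneinfo pair is purely hypothetical):

	# Bind the host's zoneinfo into every container; one /SRC:/DST pair per line.
	echo "/usr/share/zoneinfo:/usr/share/zoneinfo" | sudo tee -a /etc/containers/mounts.conf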
	I0103 19:19:06.300127  102835 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 19:19:06.300133  102835 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 19:19:06.300137  102835 command_runner.go:130] > # pids_limit = 0
	I0103 19:19:06.300143  102835 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 19:19:06.300148  102835 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 19:19:06.300154  102835 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 19:19:06.300162  102835 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 19:19:06.300168  102835 command_runner.go:130] > # log_size_max = -1
	I0103 19:19:06.300175  102835 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 19:19:06.300179  102835 command_runner.go:130] > # log_to_journald = false
	I0103 19:19:06.300185  102835 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 19:19:06.300189  102835 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 19:19:06.300194  102835 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 19:19:06.300199  102835 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 19:19:06.300204  102835 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 19:19:06.300208  102835 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 19:19:06.300213  102835 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 19:19:06.300217  102835 command_runner.go:130] > # read_only = false
	I0103 19:19:06.300223  102835 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 19:19:06.300229  102835 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 19:19:06.300233  102835 command_runner.go:130] > # live configuration reload.
	I0103 19:19:06.300236  102835 command_runner.go:130] > # log_level = "info"
	I0103 19:19:06.300242  102835 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 19:19:06.300249  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:19:06.300253  102835 command_runner.go:130] > # log_filter = ""
	I0103 19:19:06.300261  102835 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 19:19:06.300266  102835 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 19:19:06.300270  102835 command_runner.go:130] > # separated by commas.
	I0103 19:19:06.300274  102835 command_runner.go:130] > # uid_mappings = ""
	I0103 19:19:06.300279  102835 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 19:19:06.300286  102835 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 19:19:06.300290  102835 command_runner.go:130] > # separated by commas.
	I0103 19:19:06.300293  102835 command_runner.go:130] > # gid_mappings = ""
	I0103 19:19:06.300299  102835 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 19:19:06.300305  102835 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:19:06.300310  102835 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:19:06.300314  102835 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 19:19:06.300320  102835 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 19:19:06.300326  102835 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:19:06.300332  102835 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:19:06.300336  102835 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 19:19:06.300344  102835 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 19:19:06.300350  102835 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 19:19:06.300361  102835 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 19:19:06.300365  102835 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 19:19:06.300371  102835 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 19:19:06.300378  102835 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 19:19:06.300382  102835 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 19:19:06.300387  102835 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 19:19:06.300391  102835 command_runner.go:130] > # drop_infra_ctr = true
	I0103 19:19:06.300396  102835 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 19:19:06.300401  102835 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 19:19:06.300408  102835 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 19:19:06.300412  102835 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 19:19:06.300418  102835 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 19:19:06.300422  102835 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 19:19:06.300426  102835 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 19:19:06.300433  102835 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 19:19:06.300436  102835 command_runner.go:130] > # pinns_path = ""
	I0103 19:19:06.300442  102835 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 19:19:06.300448  102835 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 19:19:06.300456  102835 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 19:19:06.300460  102835 command_runner.go:130] > # default_runtime = "runc"
	I0103 19:19:06.300464  102835 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 19:19:06.300471  102835 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0103 19:19:06.300481  102835 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 19:19:06.300486  102835 command_runner.go:130] > # creation as a file is not desired either.
	I0103 19:19:06.300493  102835 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 19:19:06.300498  102835 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 19:19:06.300502  102835 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 19:19:06.300505  102835 command_runner.go:130] > # ]
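	To make the /etc/hostname example above concrete, a minimal sketch of setting this option, assuming the common /etc/crio/crio.conf.d drop-in directory (editing /etc/crio/crio.conf directly works too):

	# Hypothetical drop-in: fail container creation when a mount source is absent,
	# instead of letting it be created as an empty directory on the host.
	sudo tee /etc/crio/crio.conf.d/10-absent-mounts.conf <<-'EOF'
		[crio.runtime]
		absent_mount_sources_to_reject = [
			"/etc/hostname",
		]
	EOF
	sudo systemctl restart crio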
	I0103 19:19:06.300511  102835 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 19:19:06.300516  102835 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 19:19:06.300522  102835 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 19:19:06.300528  102835 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 19:19:06.300531  102835 command_runner.go:130] > #
	I0103 19:19:06.300535  102835 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 19:19:06.300540  102835 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 19:19:06.300543  102835 command_runner.go:130] > #  runtime_type = "oci"
	I0103 19:19:06.300550  102835 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 19:19:06.300554  102835 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 19:19:06.300563  102835 command_runner.go:130] > #  allowed_annotations = []
	I0103 19:19:06.300567  102835 command_runner.go:130] > # Where:
	I0103 19:19:06.300572  102835 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 19:19:06.300578  102835 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 19:19:06.300584  102835 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 19:19:06.300589  102835 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 19:19:06.300593  102835 command_runner.go:130] > #   in $PATH.
	I0103 19:19:06.300598  102835 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 19:19:06.300603  102835 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 19:19:06.300608  102835 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 19:19:06.300612  102835 command_runner.go:130] > #   state.
	I0103 19:19:06.300618  102835 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 19:19:06.300623  102835 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0103 19:19:06.300629  102835 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 19:19:06.300634  102835 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 19:19:06.300639  102835 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 19:19:06.300647  102835 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 19:19:06.300651  102835 command_runner.go:130] > #   The currently recognized values are:
	I0103 19:19:06.300657  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 19:19:06.300665  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 19:19:06.300671  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 19:19:06.300676  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 19:19:06.300683  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 19:19:06.300689  102835 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 19:19:06.300695  102835 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 19:19:06.300701  102835 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 19:19:06.300706  102835 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 19:19:06.300710  102835 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 19:19:06.300714  102835 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0103 19:19:06.300718  102835 command_runner.go:130] > runtime_type = "oci"
	I0103 19:19:06.300722  102835 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 19:19:06.300726  102835 command_runner.go:130] > runtime_config_path = ""
	I0103 19:19:06.300729  102835 command_runner.go:130] > monitor_path = ""
	I0103 19:19:06.300733  102835 command_runner.go:130] > monitor_cgroup = ""
	I0103 19:19:06.300739  102835 command_runner.go:130] > monitor_exec_cgroup = ""
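	Registering an additional handler follows the same shape as the runc entry above. A hypothetical sketch for crun (the binary path and drop-in name are assumptions):

	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<-'EOF'
		[crio.runtime.runtimes.crun]
		runtime_path = "/usr/bin/crun"
		runtime_type = "oci"
		runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio
	# Pods select the handler through a RuntimeClass whose handler field is "crun".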
	I0103 19:19:06.300785  102835 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 19:19:06.300789  102835 command_runner.go:130] > # running containers
	I0103 19:19:06.300793  102835 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 19:19:06.300798  102835 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 19:19:06.300805  102835 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 19:19:06.300810  102835 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0103 19:19:06.300814  102835 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 19:19:06.300819  102835 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 19:19:06.300823  102835 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 19:19:06.300827  102835 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 19:19:06.300832  102835 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 19:19:06.300836  102835 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 19:19:06.300842  102835 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 19:19:06.300847  102835 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 19:19:06.300852  102835 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 19:19:06.300859  102835 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0103 19:19:06.300866  102835 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0103 19:19:06.300874  102835 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 19:19:06.300882  102835 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 19:19:06.300891  102835 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 19:19:06.300896  102835 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 19:19:06.300903  102835 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 19:19:06.300906  102835 command_runner.go:130] > # Example:
	I0103 19:19:06.300910  102835 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 19:19:06.300915  102835 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 19:19:06.300919  102835 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 19:19:06.300924  102835 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 19:19:06.300928  102835 command_runner.go:130] > # cpuset = "0-1"
	I0103 19:19:06.300931  102835 command_runner.go:130] > # cpushares = 0
	I0103 19:19:06.300934  102835 command_runner.go:130] > # Where:
	I0103 19:19:06.300939  102835 command_runner.go:130] > # The workload name is workload-type.
	I0103 19:19:06.300945  102835 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 19:19:06.300950  102835 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 19:19:06.300955  102835 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 19:19:06.300962  102835 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 19:19:06.300973  102835 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
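	Tying the workload example together: a pod opts in with the activation annotation (key only) and can override a resource per container with the prefixed annotation. A hypothetical sketch using the names from the example above:

	kubectl apply -f - <<-'EOF'
		apiVersion: v1
		kind: Pod
		metadata:
		  name: workload-demo
		  annotations:
		    io.crio/workload: ""
		    io.crio.workload-type/app: '{"cpushares": "512"}'
		spec:
		  containers:
		  - name: app
		    image: registry.k8s.io/pause:3.9
	EOF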
	I0103 19:19:06.300976  102835 command_runner.go:130] > # 
	I0103 19:19:06.300982  102835 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 19:19:06.300985  102835 command_runner.go:130] > #
	I0103 19:19:06.300991  102835 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 19:19:06.300997  102835 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 19:19:06.301003  102835 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 19:19:06.301008  102835 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 19:19:06.301014  102835 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 19:19:06.301017  102835 command_runner.go:130] > [crio.image]
	I0103 19:19:06.301023  102835 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 19:19:06.301027  102835 command_runner.go:130] > # default_transport = "docker://"
	I0103 19:19:06.301032  102835 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 19:19:06.301038  102835 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:19:06.301042  102835 command_runner.go:130] > # global_auth_file = ""
	I0103 19:19:06.301047  102835 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 19:19:06.301051  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:19:06.301056  102835 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 19:19:06.301064  102835 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 19:19:06.301071  102835 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:19:06.301077  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:19:06.301081  102835 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 19:19:06.301087  102835 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 19:19:06.301092  102835 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0103 19:19:06.301098  102835 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0103 19:19:06.301103  102835 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 19:19:06.301107  102835 command_runner.go:130] > # pause_command = "/pause"
	I0103 19:19:06.301113  102835 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 19:19:06.301118  102835 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 19:19:06.301124  102835 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 19:19:06.301130  102835 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 19:19:06.301134  102835 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 19:19:06.301138  102835 command_runner.go:130] > # signature_policy = ""
	I0103 19:19:06.301147  102835 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 19:19:06.301152  102835 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 19:19:06.301156  102835 command_runner.go:130] > # changing them here.
	I0103 19:19:06.301162  102835 command_runner.go:130] > # insecure_registries = [
	I0103 19:19:06.301166  102835 command_runner.go:130] > # ]
	I0103 19:19:06.301171  102835 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 19:19:06.301176  102835 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0103 19:19:06.301180  102835 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 19:19:06.301185  102835 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 19:19:06.301189  102835 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 19:19:06.301195  102835 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 19:19:06.301198  102835 command_runner.go:130] > # CNI plugins.
	I0103 19:19:06.301202  102835 command_runner.go:130] > [crio.network]
	I0103 19:19:06.301209  102835 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 19:19:06.301214  102835 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0103 19:19:06.301218  102835 command_runner.go:130] > # cni_default_network = ""
	I0103 19:19:06.301223  102835 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 19:19:06.301228  102835 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 19:19:06.301233  102835 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 19:19:06.301236  102835 command_runner.go:130] > # plugin_dirs = [
	I0103 19:19:06.301240  102835 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 19:19:06.301245  102835 command_runner.go:130] > # ]
	I0103 19:19:06.301250  102835 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0103 19:19:06.301254  102835 command_runner.go:130] > [crio.metrics]
	I0103 19:19:06.301259  102835 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 19:19:06.301264  102835 command_runner.go:130] > # enable_metrics = false
	I0103 19:19:06.301268  102835 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 19:19:06.301272  102835 command_runner.go:130] > # By default, all metrics are enabled.
	I0103 19:19:06.301278  102835 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0103 19:19:06.301283  102835 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 19:19:06.301289  102835 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 19:19:06.301293  102835 command_runner.go:130] > # metrics_collectors = [
	I0103 19:19:06.301296  102835 command_runner.go:130] > # 	"operations",
	I0103 19:19:06.301300  102835 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 19:19:06.301305  102835 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 19:19:06.301309  102835 command_runner.go:130] > # 	"operations_errors",
	I0103 19:19:06.301312  102835 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 19:19:06.301316  102835 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 19:19:06.301320  102835 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 19:19:06.301327  102835 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 19:19:06.301331  102835 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 19:19:06.301335  102835 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 19:19:06.301338  102835 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 19:19:06.301342  102835 command_runner.go:130] > # 	"containers_oom_total",
	I0103 19:19:06.301346  102835 command_runner.go:130] > # 	"containers_oom",
	I0103 19:19:06.301350  102835 command_runner.go:130] > # 	"processes_defunct",
	I0103 19:19:06.301353  102835 command_runner.go:130] > # 	"operations_total",
	I0103 19:19:06.301357  102835 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 19:19:06.301362  102835 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 19:19:06.301366  102835 command_runner.go:130] > # 	"operations_errors_total",
	I0103 19:19:06.301370  102835 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 19:19:06.301374  102835 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 19:19:06.301378  102835 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 19:19:06.301382  102835 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 19:19:06.301386  102835 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 19:19:06.301390  102835 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 19:19:06.301393  102835 command_runner.go:130] > # ]
	I0103 19:19:06.301400  102835 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 19:19:06.301403  102835 command_runner.go:130] > # metrics_port = 9090
	I0103 19:19:06.301408  102835 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 19:19:06.301412  102835 command_runner.go:130] > # metrics_socket = ""
	I0103 19:19:06.301417  102835 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 19:19:06.301422  102835 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 19:19:06.301428  102835 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 19:19:06.301432  102835 command_runner.go:130] > # certificate on any modification event.
	I0103 19:19:06.301436  102835 command_runner.go:130] > # metrics_cert = ""
	I0103 19:19:06.301441  102835 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 19:19:06.301445  102835 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 19:19:06.301449  102835 command_runner.go:130] > # metrics_key = ""
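	Everything in [crio.metrics] is commented out here, so metrics are disabled in this run. A minimal sketch of enabling and scraping them, using the default port shown above (the drop-in path is an assumption):

	sudo tee /etc/crio/crio.conf.d/30-metrics.conf <<-'EOF'
		[crio.metrics]
		enable_metrics = true
		metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | head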
	I0103 19:19:06.301455  102835 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 19:19:06.301459  102835 command_runner.go:130] > [crio.tracing]
	I0103 19:19:06.301464  102835 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 19:19:06.301469  102835 command_runner.go:130] > # enable_tracing = false
	I0103 19:19:06.301477  102835 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0103 19:19:06.301484  102835 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 19:19:06.301494  102835 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 19:19:06.301499  102835 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 19:19:06.301504  102835 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 19:19:06.301508  102835 command_runner.go:130] > [crio.stats]
	I0103 19:19:06.301513  102835 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 19:19:06.301518  102835 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 19:19:06.301522  102835 command_runner.go:130] > # stats_collection_period = 0
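	Several options in the dump above are flagged as supporting live configuration reload; CRI-O re-reads those on SIGHUP, so a full restart is only required for the rest. A sketch:

	sudo systemctl reload crio        # works where the unit defines ExecReload (common packaging)
	sudo kill -HUP "$(pidof crio)"    # the direct equivalent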
	I0103 19:19:06.301907  102835 command_runner.go:130] ! time="2024-01-03 19:19:06.296197142Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0103 19:19:06.301926  102835 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0103 19:19:06.302005  102835 cni.go:84] Creating CNI manager for ""
	I0103 19:19:06.302021  102835 cni.go:136] 1 nodes found, recommending kindnet
	I0103 19:19:06.302042  102835 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:19:06.302069  102835 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-867906 NodeName:multinode-867906 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:19:06.302233  102835 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-867906"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 19:19:06.302291  102835 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-867906 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-867906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
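	The [Service] stanza above relies on a standard systemd drop-in idiom: the first, empty ExecStart= clears the command inherited from kubelet.service so the drop-in's ExecStart replaces it instead of appending a second command. A minimal sketch (the flag list is truncated from the line above):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<-'EOF'
		[Service]
		ExecStart=
		ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet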
	I0103 19:19:06.302336  102835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:19:06.310096  102835 command_runner.go:130] > kubeadm
	I0103 19:19:06.310124  102835 command_runner.go:130] > kubectl
	I0103 19:19:06.310130  102835 command_runner.go:130] > kubelet
	I0103 19:19:06.310163  102835 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:19:06.310213  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 19:19:06.317872  102835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0103 19:19:06.332977  102835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 19:19:06.348416  102835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0103 19:19:06.363716  102835 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0103 19:19:06.366860  102835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:19:06.375977  102835 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906 for IP: 192.168.58.2
	I0103 19:19:06.376002  102835 certs.go:190] acquiring lock for shared ca certs: {Name:mk5aa238e4284ee43cf20f760a8d5a161bd1dece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:06.376135  102835 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key
	I0103 19:19:06.376181  102835 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key
	I0103 19:19:06.376233  102835 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key
	I0103 19:19:06.376257  102835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt with IP's: []
	I0103 19:19:06.556272  102835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt ...
	I0103 19:19:06.556305  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt: {Name:mkb6b93cfbcc1b90b443ceab27dfbe7a3db79717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:06.556467  102835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key ...
	I0103 19:19:06.556477  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key: {Name:mkcdf1218647a7895fec9e6ca10cef1ce462cfdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:06.556541  102835 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.key.cee25041
	I0103 19:19:06.556554  102835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 19:19:06.704348  102835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.crt.cee25041 ...
	I0103 19:19:06.704379  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.crt.cee25041: {Name:mk161f959c5352b3f9aaba1d444838be6971bab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:06.704523  102835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.key.cee25041 ...
	I0103 19:19:06.704536  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.key.cee25041: {Name:mkb7ef07523b3f1b33afe7875dd75f9faa231515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:06.704595  102835 certs.go:337] copying /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.crt
	I0103 19:19:06.704673  102835 certs.go:341] copying /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.key
	I0103 19:19:06.704730  102835 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.key
	I0103 19:19:06.704743  102835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.crt with IP's: []
	I0103 19:19:06.924546  102835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.crt ...
	I0103 19:19:06.924588  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.crt: {Name:mka01b5fdd8415a8fe62d0fc129e352529299da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:06.924778  102835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.key ...
	I0103 19:19:06.924797  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.key: {Name:mk60543d4b49f91c13f78442e7d91df4e46d16e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:06.924893  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 19:19:06.924918  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 19:19:06.924933  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 19:19:06.924948  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 19:19:06.924966  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:19:06.924985  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:19:06.925004  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:19:06.925024  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:19:06.925097  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem (1338 bytes)
	W0103 19:19:06.925146  102835 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670_empty.pem, impossibly tiny 0 bytes
	I0103 19:19:06.925164  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:19:06.925204  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:19:06.925241  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:19:06.925275  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem (1679 bytes)
	I0103 19:19:06.925331  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem (1708 bytes)
	I0103 19:19:06.925372  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:19:06.925394  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem -> /usr/share/ca-certificates/15670.pem
	I0103 19:19:06.925412  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> /usr/share/ca-certificates/156702.pem
	I0103 19:19:06.925970  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 19:19:06.947751  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 19:19:06.970106  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 19:19:06.991432  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 19:19:07.012366  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:19:07.032517  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0103 19:19:07.052302  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:19:07.072786  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0103 19:19:07.092908  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:19:07.113181  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem --> /usr/share/ca-certificates/15670.pem (1338 bytes)
	I0103 19:19:07.133622  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /usr/share/ca-certificates/156702.pem (1708 bytes)
	I0103 19:19:07.153827  102835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 19:19:07.169258  102835 ssh_runner.go:195] Run: openssl version
	I0103 19:19:07.174474  102835 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0103 19:19:07.174554  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:19:07.183055  102835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:19:07.186339  102835 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:19:07.186372  102835 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:19:07.186436  102835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:19:07.192404  102835 command_runner.go:130] > b5213941
	I0103 19:19:07.192642  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 19:19:07.201102  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15670.pem && ln -fs /usr/share/ca-certificates/15670.pem /etc/ssl/certs/15670.pem"
	I0103 19:19:07.209294  102835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15670.pem
	I0103 19:19:07.212468  102835 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 19:05 /usr/share/ca-certificates/15670.pem
	I0103 19:19:07.212492  102835 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:05 /usr/share/ca-certificates/15670.pem
	I0103 19:19:07.212525  102835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15670.pem
	I0103 19:19:07.218416  102835 command_runner.go:130] > 51391683
	I0103 19:19:07.218616  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15670.pem /etc/ssl/certs/51391683.0"
	I0103 19:19:07.227072  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156702.pem && ln -fs /usr/share/ca-certificates/156702.pem /etc/ssl/certs/156702.pem"
	I0103 19:19:07.235434  102835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156702.pem
	I0103 19:19:07.238512  102835 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 19:05 /usr/share/ca-certificates/156702.pem
	I0103 19:19:07.238550  102835 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:05 /usr/share/ca-certificates/156702.pem
	I0103 19:19:07.238586  102835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156702.pem
	I0103 19:19:07.244504  102835 command_runner.go:130] > 3ec20f2e
	I0103 19:19:07.244692  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/156702.pem /etc/ssl/certs/3ec20f2e.0"
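	The test -L / ln -fs pairs above implement OpenSSL's CApath lookup convention: a CA certificate is found via a symlink named after its subject hash with a ".0" suffix. The same steps by hand:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem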
	I0103 19:19:07.253331  102835 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:19:07.256366  102835 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:19:07.256412  102835 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:19:07.256451  102835 kubeadm.go:404] StartCluster: {Name:multinode-867906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-867906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:19:07.256529  102835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 19:19:07.256575  102835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:19:07.288787  102835 cri.go:89] found id: ""
	I0103 19:19:07.288844  102835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 19:19:07.296233  102835 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0103 19:19:07.296259  102835 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0103 19:19:07.296266  102835 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0103 19:19:07.297035  102835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 19:19:07.304674  102835 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 19:19:07.304726  102835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 19:19:07.312085  102835 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0103 19:19:07.312105  102835 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0103 19:19:07.312112  102835 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0103 19:19:07.312122  102835 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:19:07.312156  102835 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:19:07.312185  102835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 19:19:07.354320  102835 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 19:19:07.354373  102835 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0103 19:19:07.354425  102835 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 19:19:07.354434  102835 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 19:19:07.387488  102835 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0103 19:19:07.387514  102835 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0103 19:19:07.387584  102835 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I0103 19:19:07.387594  102835 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0103 19:19:07.387621  102835 kubeadm.go:322] OS: Linux
	I0103 19:19:07.387627  102835 command_runner.go:130] > OS: Linux
	I0103 19:19:07.387664  102835 kubeadm.go:322] CGROUPS_CPU: enabled
	I0103 19:19:07.387672  102835 command_runner.go:130] > CGROUPS_CPU: enabled
	I0103 19:19:07.387747  102835 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0103 19:19:07.387776  102835 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0103 19:19:07.387844  102835 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0103 19:19:07.387864  102835 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0103 19:19:07.387913  102835 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0103 19:19:07.387920  102835 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0103 19:19:07.387975  102835 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0103 19:19:07.387981  102835 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0103 19:19:07.388044  102835 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0103 19:19:07.388052  102835 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0103 19:19:07.388092  102835 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0103 19:19:07.388098  102835 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0103 19:19:07.388170  102835 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0103 19:19:07.388183  102835 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0103 19:19:07.388254  102835 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0103 19:19:07.388266  102835 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0103 19:19:07.452106  102835 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 19:19:07.452132  102835 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 19:19:07.452217  102835 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 19:19:07.452229  102835 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 19:19:07.452320  102835 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 19:19:07.452329  102835 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 19:19:07.642942  102835 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:19:07.647296  102835 out.go:204]   - Generating certificates and keys ...
	I0103 19:19:07.642978  102835 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:19:07.647396  102835 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 19:19:07.647409  102835 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0103 19:19:07.647474  102835 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 19:19:07.647490  102835 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0103 19:19:07.834938  102835 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 19:19:07.834962  102835 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 19:19:07.896867  102835 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 19:19:07.896899  102835 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0103 19:19:08.072223  102835 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 19:19:08.072275  102835 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0103 19:19:08.255839  102835 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 19:19:08.255870  102835 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0103 19:19:08.362686  102835 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 19:19:08.362726  102835 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0103 19:19:08.362897  102835 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-867906] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 19:19:08.362911  102835 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-867906] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 19:19:08.438796  102835 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 19:19:08.438826  102835 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0103 19:19:08.439013  102835 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-867906] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 19:19:08.439048  102835 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-867906] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 19:19:08.516084  102835 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 19:19:08.516130  102835 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 19:19:08.703622  102835 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 19:19:08.703649  102835 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 19:19:08.777437  102835 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 19:19:08.777466  102835 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0103 19:19:08.777555  102835 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:19:08.777571  102835 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:19:08.964644  102835 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:19:08.964667  102835 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:19:09.141751  102835 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:19:09.141785  102835 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:19:09.265406  102835 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:19:09.265443  102835 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:19:09.461234  102835 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:19:09.461265  102835 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:19:09.461753  102835 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 19:19:09.461770  102835 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 19:19:09.464978  102835 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:19:09.467303  102835 out.go:204]   - Booting up control plane ...
	I0103 19:19:09.465027  102835 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:19:09.467436  102835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:19:09.467453  102835 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:19:09.467545  102835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:19:09.467556  102835 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:19:09.467644  102835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:19:09.467654  102835 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:19:09.475328  102835 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:19:09.475356  102835 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:19:09.476071  102835 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:19:09.476097  102835 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:19:09.476161  102835 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 19:19:09.476185  102835 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 19:19:09.561617  102835 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 19:19:09.561656  102835 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 19:19:14.563328  102835 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001828 seconds
	I0103 19:19:14.563334  102835 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.001828 seconds
	I0103 19:19:14.563473  102835 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 19:19:14.563484  102835 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 19:19:14.576920  102835 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 19:19:14.576946  102835 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 19:19:15.096318  102835 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 19:19:15.096351  102835 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0103 19:19:15.096603  102835 kubeadm.go:322] [mark-control-plane] Marking the node multinode-867906 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 19:19:15.096617  102835 command_runner.go:130] > [mark-control-plane] Marking the node multinode-867906 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 19:19:15.606241  102835 kubeadm.go:322] [bootstrap-token] Using token: p912aa.9kavpliedsv5uzhc
	I0103 19:19:15.606266  102835 command_runner.go:130] > [bootstrap-token] Using token: p912aa.9kavpliedsv5uzhc
	I0103 19:19:15.608007  102835 out.go:204]   - Configuring RBAC rules ...
	I0103 19:19:15.608143  102835 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 19:19:15.608159  102835 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 19:19:15.612770  102835 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 19:19:15.612788  102835 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 19:19:15.619230  102835 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 19:19:15.619264  102835 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 19:19:15.624344  102835 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 19:19:15.624367  102835 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 19:19:15.626904  102835 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 19:19:15.626927  102835 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 19:19:15.629338  102835 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 19:19:15.629363  102835 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 19:19:15.639386  102835 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 19:19:15.639404  102835 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 19:19:15.840413  102835 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 19:19:15.840440  102835 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0103 19:19:16.017136  102835 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 19:19:16.017174  102835 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0103 19:19:16.018086  102835 kubeadm.go:322] 
	I0103 19:19:16.018196  102835 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 19:19:16.018234  102835 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0103 19:19:16.018267  102835 kubeadm.go:322] 
	I0103 19:19:16.018380  102835 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0103 19:19:16.018407  102835 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 19:19:16.018415  102835 kubeadm.go:322] 
	I0103 19:19:16.018463  102835 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0103 19:19:16.018473  102835 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 19:19:16.018583  102835 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 19:19:16.018599  102835 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 19:19:16.018661  102835 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 19:19:16.018673  102835 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 19:19:16.018681  102835 kubeadm.go:322] 
	I0103 19:19:16.018758  102835 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0103 19:19:16.018766  102835 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 19:19:16.018782  102835 kubeadm.go:322] 
	I0103 19:19:16.018849  102835 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 19:19:16.018859  102835 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 19:19:16.018864  102835 kubeadm.go:322] 
	I0103 19:19:16.018928  102835 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0103 19:19:16.018939  102835 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 19:19:16.019024  102835 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 19:19:16.019056  102835 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 19:19:16.019151  102835 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 19:19:16.019162  102835 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 19:19:16.019168  102835 kubeadm.go:322] 
	I0103 19:19:16.019274  102835 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0103 19:19:16.019283  102835 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 19:19:16.019377  102835 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0103 19:19:16.019389  102835 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 19:19:16.019395  102835 kubeadm.go:322] 
	I0103 19:19:16.019502  102835 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token p912aa.9kavpliedsv5uzhc \
	I0103 19:19:16.019513  102835 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token p912aa.9kavpliedsv5uzhc \
	I0103 19:19:16.019641  102835 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 \
	I0103 19:19:16.019651  102835 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 \
	I0103 19:19:16.019679  102835 command_runner.go:130] > 	--control-plane 
	I0103 19:19:16.019689  102835 kubeadm.go:322] 	--control-plane 
	I0103 19:19:16.019695  102835 kubeadm.go:322] 
	I0103 19:19:16.019802  102835 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0103 19:19:16.019810  102835 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 19:19:16.019819  102835 kubeadm.go:322] 
	I0103 19:19:16.019923  102835 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token p912aa.9kavpliedsv5uzhc \
	I0103 19:19:16.019933  102835 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p912aa.9kavpliedsv5uzhc \
	I0103 19:19:16.020076  102835 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 
	I0103 19:19:16.020088  102835 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 
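For reference, the worker-join command kubeadm printed above would be run verbatim on a prospective second node. This is only a restatement of the log output; the token and CA-cert hash are the ones from this run and are short-lived:

	sudo kubeadm join control-plane.minikube.internal:8443 \
		--token p912aa.9kavpliedsv5uzhc \
		--discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1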
	I0103 19:19:16.022068  102835 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0103 19:19:16.022090  102835 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0103 19:19:16.022263  102835 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:19:16.022280  102835 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:19:16.022312  102835 cni.go:84] Creating CNI manager for ""
	I0103 19:19:16.022324  102835 cni.go:136] 1 nodes found, recommending kindnet
	I0103 19:19:16.024467  102835 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 19:19:16.026047  102835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:19:16.029691  102835 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 19:19:16.029726  102835 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0103 19:19:16.029735  102835 command_runner.go:130] > Device: 34h/52d	Inode: 582508      Links: 1
	I0103 19:19:16.029741  102835 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:19:16.029751  102835 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0103 19:19:16.029758  102835 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0103 19:19:16.029764  102835 command_runner.go:130] > Change: 2024-01-03 18:59:22.202270685 +0000
	I0103 19:19:16.029771  102835 command_runner.go:130] >  Birth: 2024-01-03 18:59:22.178268890 +0000
	I0103 19:19:16.029813  102835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:19:16.029823  102835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:19:16.045799  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:19:16.652660  102835 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0103 19:19:16.657801  102835 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0103 19:19:16.664143  102835 command_runner.go:130] > serviceaccount/kindnet created
	I0103 19:19:16.675771  102835 command_runner.go:130] > daemonset.apps/kindnet created
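The four objects created above come from minikube's kindnet CNI manifest. A quick way to confirm the rollout afterwards; note the app=kindnet label selector is an assumption about the manifest, which is not shown in this log:

	kubectl -n kube-system get daemonset kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide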
	I0103 19:19:16.679604  102835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:19:16.679652  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:16.679677  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-867906 minikube.k8s.io/updated_at=2024_01_03T19_19_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:16.686960  102835 command_runner.go:130] > -16
	I0103 19:19:16.687005  102835 ops.go:34] apiserver oom_adj: -16
	I0103 19:19:16.774598  102835 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0103 19:19:16.774717  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:16.784371  102835 command_runner.go:130] > node/multinode-867906 labeled
	I0103 19:19:16.838716  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:17.275347  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:17.338265  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:17.774818  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:17.838978  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:18.275649  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:18.337625  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:18.775183  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:18.837274  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:19.274932  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:19.336597  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:19.775312  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:19.835720  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:20.274956  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:20.336868  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:20.774942  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:20.838608  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:21.275187  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:21.338127  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:21.775330  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:21.836875  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:22.274788  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:22.339015  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:22.775353  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:22.836833  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:23.274910  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:23.340534  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:23.775095  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:23.837516  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:24.274906  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:24.337071  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:24.775184  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:24.835785  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:25.275371  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:25.337480  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:25.775688  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:25.837524  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:26.275152  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:26.338944  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:26.775498  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:26.840400  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:27.274960  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:27.339225  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:27.774740  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:27.838121  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:28.275376  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:28.339441  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:28.775047  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:28.836583  102835 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 19:19:29.275714  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:19:29.414593  102835 command_runner.go:130] > NAME      SECRETS   AGE
	I0103 19:19:29.414611  102835 command_runner.go:130] > default   0         0s
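The repeated `get sa default` calls above are minikube polling, roughly every 500ms per the timestamps, until kubeadm's controllers create the default service account. A standalone sketch of the same wait, using the exact command from the log:

	# poll until the default service account exists (sketch of the loop visible above)
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done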
	I0103 19:19:29.417176  102835 kubeadm.go:1088] duration metric: took 12.737576481s to wait for elevateKubeSystemPrivileges.
	I0103 19:19:29.417210  102835 kubeadm.go:406] StartCluster complete in 22.160762378s
	I0103 19:19:29.417232  102835 settings.go:142] acquiring lock: {Name:mk6273be8cd3d06b021992a8bd25ebbd6366b42d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:29.417308  102835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:19:29.417937  102835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/kubeconfig: {Name:mke772e93691b15e3e729ce43b6e84f73895395b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:19:29.418167  102835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:19:29.418204  102835 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 19:19:29.418279  102835 addons.go:69] Setting storage-provisioner=true in profile "multinode-867906"
	I0103 19:19:29.418336  102835 addons.go:237] Setting addon storage-provisioner=true in "multinode-867906"
	I0103 19:19:29.418394  102835 host.go:66] Checking if "multinode-867906" exists ...
	I0103 19:19:29.418396  102835 config.go:182] Loaded profile config "multinode-867906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:19:29.418294  102835 addons.go:69] Setting default-storageclass=true in profile "multinode-867906"
	I0103 19:19:29.418445  102835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-867906"
	I0103 19:19:29.418514  102835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:19:29.418774  102835 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:19:29.418955  102835 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:19:29.418859  102835 kapi.go:59] client config for multinode-867906: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:19:29.419667  102835 cert_rotation.go:137] Starting client certificate rotation controller
	I0103 19:19:29.419839  102835 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:19:29.419851  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:29.419859  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:29.419865  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:29.428836  102835 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0103 19:19:29.428858  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:29.428869  102835 round_trippers.go:580]     Audit-Id: ea84f821-1698-4b53-8477-9862f867712f
	I0103 19:19:29.428879  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:29.428888  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:29.428900  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:29.428909  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:29.428920  102835 round_trippers.go:580]     Content-Length: 291
	I0103 19:19:29.428929  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:29 GMT
	I0103 19:19:29.428961  102835 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2d06393-ba6f-4103-beba-76fece3a20fb","resourceVersion":"225","creationTimestamp":"2024-01-03T19:19:15Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0103 19:19:29.429294  102835 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2d06393-ba6f-4103-beba-76fece3a20fb","resourceVersion":"225","creationTimestamp":"2024-01-03T19:19:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0103 19:19:29.429345  102835 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:19:29.429356  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:29.429366  102835 round_trippers.go:473]     Content-Type: application/json
	I0103 19:19:29.429375  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:29.429383  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:29.441211  102835 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:19:29.439887  102835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:19:29.442488  102835 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:19:29.442506  102835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 19:19:29.442556  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:29.442667  102835 kapi.go:59] client config for multinode-867906: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:19:29.442963  102835 addons.go:237] Setting addon default-storageclass=true in "multinode-867906"
	I0103 19:19:29.442994  102835 host.go:66] Checking if "multinode-867906" exists ...
	I0103 19:19:29.443319  102835 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:19:29.459077  102835 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 19:19:29.459101  102835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 19:19:29.459152  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:19:29.459832  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:19:29.474771  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:19:29.475959  102835 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0103 19:19:29.475983  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:29.475993  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:29.476002  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:29.476011  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:29.476023  102835 round_trippers.go:580]     Content-Length: 291
	I0103 19:19:29.476031  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:29 GMT
	I0103 19:19:29.476041  102835 round_trippers.go:580]     Audit-Id: d7461f82-30c3-41ad-b390-823ee306bcbf
	I0103 19:19:29.476049  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:29.476088  102835 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2d06393-ba6f-4103-beba-76fece3a20fb","resourceVersion":"329","creationTimestamp":"2024-01-03T19:19:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0103 19:19:29.589129  102835 command_runner.go:130] > apiVersion: v1
	I0103 19:19:29.589157  102835 command_runner.go:130] > data:
	I0103 19:19:29.589165  102835 command_runner.go:130] >   Corefile: |
	I0103 19:19:29.589173  102835 command_runner.go:130] >     .:53 {
	I0103 19:19:29.589180  102835 command_runner.go:130] >         errors
	I0103 19:19:29.589187  102835 command_runner.go:130] >         health {
	I0103 19:19:29.589194  102835 command_runner.go:130] >            lameduck 5s
	I0103 19:19:29.589200  102835 command_runner.go:130] >         }
	I0103 19:19:29.589206  102835 command_runner.go:130] >         ready
	I0103 19:19:29.589215  102835 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0103 19:19:29.589230  102835 command_runner.go:130] >            pods insecure
	I0103 19:19:29.589246  102835 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0103 19:19:29.589260  102835 command_runner.go:130] >            ttl 30
	I0103 19:19:29.589271  102835 command_runner.go:130] >         }
	I0103 19:19:29.589279  102835 command_runner.go:130] >         prometheus :9153
	I0103 19:19:29.589288  102835 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0103 19:19:29.589297  102835 command_runner.go:130] >            max_concurrent 1000
	I0103 19:19:29.589306  102835 command_runner.go:130] >         }
	I0103 19:19:29.589313  102835 command_runner.go:130] >         cache 30
	I0103 19:19:29.589318  102835 command_runner.go:130] >         loop
	I0103 19:19:29.589322  102835 command_runner.go:130] >         reload
	I0103 19:19:29.589332  102835 command_runner.go:130] >         loadbalance
	I0103 19:19:29.589339  102835 command_runner.go:130] >     }
	I0103 19:19:29.589343  102835 command_runner.go:130] > kind: ConfigMap
	I0103 19:19:29.589349  102835 command_runner.go:130] > metadata:
	I0103 19:19:29.589358  102835 command_runner.go:130] >   creationTimestamp: "2024-01-03T19:19:15Z"
	I0103 19:19:29.589367  102835 command_runner.go:130] >   name: coredns
	I0103 19:19:29.589377  102835 command_runner.go:130] >   namespace: kube-system
	I0103 19:19:29.589388  102835 command_runner.go:130] >   resourceVersion: "221"
	I0103 19:19:29.589398  102835 command_runner.go:130] >   uid: 969c85c7-202b-4ad0-80ab-a6ed389828ad
	I0103 19:19:29.589572  102835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
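The sed pipeline above edits the Corefile dumped a few lines earlier: it inserts a `log` directive before the `errors` line and a `hosts` block before the `forward` stanza, so the replaced ConfigMap should contain roughly:

	.:53 {
	    log
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	       ttl 30
	    }
	    prometheus :9153
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    cache 30
	    loop
	    reload
	    loadbalance
	}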
	I0103 19:19:29.590693  102835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:19:29.593716  102835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 19:19:29.920527  102835 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:19:29.920548  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:29.920559  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:29.920570  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:29.977635  102835 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0103 19:19:29.977661  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:29.977671  102835 round_trippers.go:580]     Content-Length: 291
	I0103 19:19:29.977679  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:29 GMT
	I0103 19:19:29.977687  102835 round_trippers.go:580]     Audit-Id: 3dbb5be5-0ac8-43df-b7b0-cb59ccb027a7
	I0103 19:19:29.977699  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:29.977707  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:29.977714  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:29.977721  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:29.978066  102835 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2d06393-ba6f-4103-beba-76fece3a20fb","resourceVersion":"354","creationTimestamp":"2024-01-03T19:19:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0103 19:19:29.978204  102835 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-867906" context rescaled to 1 replicas
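The GET/PUT pair above is minikube using the autoscaling/v1 Scale subresource to drop CoreDNS from the kubeadm default of two replicas to one. The hand-run equivalent, offered as a sketch rather than minikube's own code path:

	kubectl -n kube-system scale deployment coredns --replicas=1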
	I0103 19:19:29.978234  102835 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:19:29.981499  102835 out.go:177] * Verifying Kubernetes components...
	I0103 19:19:29.982789  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:19:30.375044  102835 command_runner.go:130] > configmap/coredns replaced
	I0103 19:19:30.380583  102835 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0103 19:19:30.481278  102835 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0103 19:19:30.487141  102835 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0103 19:19:30.493969  102835 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0103 19:19:30.501360  102835 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0103 19:19:30.507164  102835 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0103 19:19:30.517330  102835 command_runner.go:130] > pod/storage-provisioner created
	I0103 19:19:30.521763  102835 command_runner.go:130] > storageclass.storage.k8s.io/standard created
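The "standard" StorageClass just created carries the storageclass.kubernetes.io/is-default-class annotation (visible in the API bodies below), which is what makes it the cluster's default. A one-liner to check it by hand:

	kubectl get storageclass standard -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'   # expect: true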
	I0103 19:19:30.521890  102835 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0103 19:19:30.521905  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:30.521916  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:30.521927  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:30.522307  102835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:19:30.522624  102835 kapi.go:59] client config for multinode-867906: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:19:30.522890  102835 node_ready.go:35] waiting up to 6m0s for node "multinode-867906" to be "Ready" ...
	I0103 19:19:30.522963  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:30.522970  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:30.522981  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:30.522990  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:30.523729  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:19:30.523747  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:30.523756  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:30.523762  102835 round_trippers.go:580]     Content-Length: 1273
	I0103 19:19:30.523767  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:30 GMT
	I0103 19:19:30.523772  102835 round_trippers.go:580]     Audit-Id: 54163472-2419-4dc4-ae2d-5578530fe507
	I0103 19:19:30.523778  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:30.523785  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:30.523791  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:30.523815  102835 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"368"},"items":[{"metadata":{"name":"standard","uid":"4175aa9a-d21c-415c-aa2a-7bbac0f4bbc4","resourceVersion":"359","creationTimestamp":"2024-01-03T19:19:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T19:19:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0103 19:19:30.524220  102835 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4175aa9a-d21c-415c-aa2a-7bbac0f4bbc4","resourceVersion":"359","creationTimestamp":"2024-01-03T19:19:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T19:19:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0103 19:19:30.524277  102835 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0103 19:19:30.524289  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:30.524299  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:30.524312  102835 round_trippers.go:473]     Content-Type: application/json
	I0103 19:19:30.524326  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:30.524801  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:19:30.524818  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:30.524828  102835 round_trippers.go:580]     Audit-Id: 2102d177-1ab0-436d-88cb-c7c5afdcf66a
	I0103 19:19:30.524837  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:30.524846  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:30.524858  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:30.524878  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:30.524890  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:30 GMT
	I0103 19:19:30.525032  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:30.526580  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:30.526598  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:30.526609  102835 round_trippers.go:580]     Audit-Id: c24cb58c-e55e-4df7-b68a-d09d8d64ec92
	I0103 19:19:30.526619  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:30.526629  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:30.526637  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:30.526642  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:30.526650  102835 round_trippers.go:580]     Content-Length: 1220
	I0103 19:19:30.526655  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:30 GMT
	I0103 19:19:30.526692  102835 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4175aa9a-d21c-415c-aa2a-7bbac0f4bbc4","resourceVersion":"359","creationTimestamp":"2024-01-03T19:19:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T19:19:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0103 19:19:30.528664  102835 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 19:19:30.529889  102835 addons.go:508] enable addons completed in 1.1116866s: enabled=[storage-provisioner default-storageclass]
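	The repeating GET requests above and below are minikube's node readiness wait (node_ready.go): it polls the Node object roughly every 500ms, for up to the 6m0s announced at 19:19:30, until the Ready condition in status.conditions turns True. A stand-alone equivalent of that wait, as a sketch assuming the kubeconfig context is also named multinode-867906 (not a command the test itself runs):
	
	  kubectl --context multinode-867906 wait --for=condition=Ready node/multinode-867906 --timeout=6m0s
	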
	I0103 19:19:31.023169  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:31.023204  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:31.023214  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:31.023220  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:31.025045  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:19:31.025069  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:31.025080  102835 round_trippers.go:580]     Audit-Id: 158f7a90-e627-47ab-9122-002ddd7d4375
	I0103 19:19:31.025089  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:31.025097  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:31.025106  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:31.025122  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:31.025130  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:31 GMT
	I0103 19:19:31.025254  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:31.523369  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:31.523396  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:31.523404  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:31.523410  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:31.525674  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:31.525701  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:31.525708  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:31.525714  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:31 GMT
	I0103 19:19:31.525719  102835 round_trippers.go:580]     Audit-Id: 70796b36-0fa9-4ad5-9696-c57bb0a97401
	I0103 19:19:31.525724  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:31.525729  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:31.525736  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:31.525945  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:32.023506  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:32.023528  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:32.023536  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:32.023542  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:32.025670  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:32.025688  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:32.025695  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:32.025700  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:32.025705  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:32.025711  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:32 GMT
	I0103 19:19:32.025716  102835 round_trippers.go:580]     Audit-Id: 77402bec-64ec-41f4-a73d-b5a6a9525230
	I0103 19:19:32.025723  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:32.025892  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:32.523432  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:32.523461  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:32.523473  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:32.523483  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:32.525707  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:32.525731  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:32.525740  102835 round_trippers.go:580]     Audit-Id: 67d3b528-e707-4381-bf60-abd3e68baf21
	I0103 19:19:32.525749  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:32.525757  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:32.525770  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:32.525780  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:32.525789  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:32 GMT
	I0103 19:19:32.525911  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:32.526238  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:33.023468  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:33.023488  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:33.023496  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:33.023502  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:33.025769  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:33.025789  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:33.025796  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:33.025804  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:33.025813  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:33.025822  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:33 GMT
	I0103 19:19:33.025830  102835 round_trippers.go:580]     Audit-Id: 91b047f1-9684-4e21-874d-d4626fb8a8a3
	I0103 19:19:33.025843  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:33.025977  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:33.523544  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:33.523575  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:33.523584  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:33.523590  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:33.526035  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:33.526059  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:33.526067  102835 round_trippers.go:580]     Audit-Id: c4e9a813-a9c8-4b33-bf6d-7fa75225cb83
	I0103 19:19:33.526072  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:33.526079  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:33.526084  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:33.526091  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:33.526100  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:33 GMT
	I0103 19:19:33.526272  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:34.023646  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:34.023670  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:34.023681  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:34.023690  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:34.025910  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:34.025934  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:34.025945  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:34 GMT
	I0103 19:19:34.025954  102835 round_trippers.go:580]     Audit-Id: 98815ebf-8f05-409e-9a15-476fd3f75e0e
	I0103 19:19:34.025963  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:34.025968  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:34.025973  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:34.025980  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:34.026080  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:34.523712  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:34.523740  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:34.523750  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:34.523758  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:34.525853  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:34.525873  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:34.525879  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:34.525885  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:34.525895  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:34.525900  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:34 GMT
	I0103 19:19:34.525905  102835 round_trippers.go:580]     Audit-Id: 25fca78e-dfa8-436c-828e-a353bb9958cb
	I0103 19:19:34.525912  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:34.526038  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:34.526445  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:35.023545  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:35.023567  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:35.023576  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:35.023582  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:35.025729  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:35.025753  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:35.025763  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:35.025773  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:35.025781  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:35.025789  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:35.025801  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:35 GMT
	I0103 19:19:35.025816  102835 round_trippers.go:580]     Audit-Id: 0bd72a8d-348e-4d94-bc39-9c2aba362312
	I0103 19:19:35.025942  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:35.523518  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:35.523541  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:35.523548  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:35.523554  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:35.525849  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:35.525874  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:35.525884  102835 round_trippers.go:580]     Audit-Id: dd372aaa-901a-4a3d-8bbe-1843e648518a
	I0103 19:19:35.525891  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:35.525898  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:35.525903  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:35.525908  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:35.525918  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:35 GMT
	I0103 19:19:35.526068  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:36.023330  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:36.023361  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:36.023369  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:36.023377  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:36.025819  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:36.025841  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:36.025851  102835 round_trippers.go:580]     Audit-Id: 3a6cd116-c32e-4e6e-a330-a1e1cb5a7a53
	I0103 19:19:36.025856  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:36.025861  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:36.025866  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:36.025872  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:36.025879  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:36 GMT
	I0103 19:19:36.026055  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:36.523707  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:36.523731  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:36.523744  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:36.523754  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:36.525946  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:36.525964  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:36.525972  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:36.525980  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:36.525988  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:36.525996  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:36.526005  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:36 GMT
	I0103 19:19:36.526018  102835 round_trippers.go:580]     Audit-Id: f86c4409-125a-4ed6-a278-31004989118a
	I0103 19:19:36.526164  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:36.526573  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
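	The "Ready":"False" results logged by node_ready.go:58 are read from the Node's status.conditions in the response bodies above. The same condition can be inspected directly with kubectl's JSONPath output, shown here only as an illustrative sketch (again assuming the context name matches the profile):
	
	  kubectl --context multinode-867906 get node multinode-867906 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	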
	I0103 19:19:37.023386  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:37.023406  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:37.023413  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:37.023419  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:37.025700  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:37.025723  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:37.025730  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:37.025735  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:37.025740  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:37.025746  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:37 GMT
	I0103 19:19:37.025751  102835 round_trippers.go:580]     Audit-Id: fd593b4b-6c61-452f-871b-0180bbcc6fc5
	I0103 19:19:37.025756  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:37.025898  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:37.523526  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:37.523557  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:37.523567  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:37.523577  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:37.526067  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:37.526087  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:37.526094  102835 round_trippers.go:580]     Audit-Id: f5dee619-e910-4523-b247-f466e87653b1
	I0103 19:19:37.526100  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:37.526107  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:37.526115  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:37.526123  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:37.526149  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:37 GMT
	I0103 19:19:37.526321  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:38.024058  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:38.024084  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:38.024092  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:38.024098  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:38.026291  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:38.026308  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:38.026317  102835 round_trippers.go:580]     Audit-Id: 4983f9f0-831a-4d67-96c8-e935185d20da
	I0103 19:19:38.026323  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:38.026329  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:38.026334  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:38.026340  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:38.026345  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:38 GMT
	I0103 19:19:38.026521  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:38.523154  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:38.523177  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:38.523184  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:38.523190  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:38.525432  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:38.525458  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:38.525467  102835 round_trippers.go:580]     Audit-Id: dc08c6cc-8003-412f-b099-2c9b8cc0b2d9
	I0103 19:19:38.525476  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:38.525484  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:38.525491  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:38.525502  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:38.525511  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:38 GMT
	I0103 19:19:38.525724  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:39.023105  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:39.023127  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:39.023135  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:39.023141  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:39.025316  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:39.025336  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:39.025345  102835 round_trippers.go:580]     Audit-Id: 333c4f6d-e6ec-4dda-954b-87740633cf84
	I0103 19:19:39.025353  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:39.025361  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:39.025376  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:39.025385  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:39.025394  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:39 GMT
	I0103 19:19:39.025488  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:39.025815  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:39.523402  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:39.523422  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:39.523432  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:39.523441  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:39.525541  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:39.525560  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:39.525567  102835 round_trippers.go:580]     Audit-Id: 27e38cfb-faa7-4b57-96a1-0c30570308d0
	I0103 19:19:39.525572  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:39.525578  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:39.525583  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:39.525590  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:39.525617  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:39 GMT
	I0103 19:19:39.525734  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:40.023298  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:40.023321  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:40.023329  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:40.023335  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:40.025617  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:40.025638  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:40.025648  102835 round_trippers.go:580]     Audit-Id: 5d1182bf-9f42-4a2c-ac81-a5379a9c7c2f
	I0103 19:19:40.025656  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:40.025662  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:40.025669  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:40.025676  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:40.025685  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:40 GMT
	I0103 19:19:40.025839  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:40.523271  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:40.523295  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:40.523303  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:40.523310  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:40.525459  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:40.525476  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:40.525483  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:40.525488  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:40.525493  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:40.525498  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:40.525504  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:40 GMT
	I0103 19:19:40.525509  102835 round_trippers.go:580]     Audit-Id: 4e130d08-9756-4483-94a0-4cc5dcc084ab
	I0103 19:19:40.525633  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:41.023235  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:41.023256  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:41.023264  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:41.023270  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:41.025554  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:41.025576  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:41.025588  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:41.025595  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:41.025602  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:41 GMT
	I0103 19:19:41.025610  102835 round_trippers.go:580]     Audit-Id: d7bb35b3-5e4d-4b4c-abf1-63d0e16ce940
	I0103 19:19:41.025619  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:41.025630  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:41.025758  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:41.026089  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:41.523387  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:41.523408  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:41.523417  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:41.523423  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:41.525677  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:41.525696  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:41.525703  102835 round_trippers.go:580]     Audit-Id: 0d0dab48-b135-4ab8-accd-2aea1b1b9f3a
	I0103 19:19:41.525710  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:41.525718  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:41.525727  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:41.525738  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:41.525746  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:41 GMT
	I0103 19:19:41.525881  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:42.023473  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:42.023506  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:42.023517  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:42.023530  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:42.025798  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:42.025816  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:42.025823  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:42.025828  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:42.025833  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:42.025841  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:42.025849  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:42 GMT
	I0103 19:19:42.025856  102835 round_trippers.go:580]     Audit-Id: 0cff2889-1878-4a85-bda2-01b8ab747da3
	I0103 19:19:42.025974  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:42.523562  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:42.523590  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:42.523602  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:42.523611  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:42.525717  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:42.525733  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:42.525740  102835 round_trippers.go:580]     Audit-Id: 02675c5a-e150-4540-9d0b-2f4ab582e6d3
	I0103 19:19:42.525745  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:42.525750  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:42.525755  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:42.525760  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:42.525765  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:42 GMT
	I0103 19:19:42.525907  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:43.023550  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:43.023572  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:43.023579  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:43.023585  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:43.026038  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:43.026063  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:43.026074  102835 round_trippers.go:580]     Audit-Id: e006be38-f8fc-4fa8-9339-89c4ca3e51ab
	I0103 19:19:43.026081  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:43.026086  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:43.026091  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:43.026097  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:43.026102  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:43 GMT
	I0103 19:19:43.026240  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:43.026576  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:43.523332  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:43.523352  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:43.523359  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:43.523365  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:43.525459  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:43.525480  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:43.525495  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:43 GMT
	I0103 19:19:43.525501  102835 round_trippers.go:580]     Audit-Id: 853291ee-d721-4cbe-bdd6-1bb4a8ffe016
	I0103 19:19:43.525508  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:43.525516  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:43.525525  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:43.525536  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:43.525740  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:44.023490  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:44.023510  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:44.023521  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:44.023528  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:44.025723  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:44.025746  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:44.025755  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:44.025761  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:44 GMT
	I0103 19:19:44.025766  102835 round_trippers.go:580]     Audit-Id: a3d7491f-51ac-4baa-9716-b5e3b8abcc30
	I0103 19:19:44.025771  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:44.025777  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:44.025782  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:44.025896  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:44.523342  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:44.523365  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:44.523372  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:44.523379  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:44.525388  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:19:44.525407  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:44.525413  102835 round_trippers.go:580]     Audit-Id: 22fb4972-1d18-4e2a-a322-50f0696fa092
	I0103 19:19:44.525418  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:44.525424  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:44.525429  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:44.525434  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:44.525439  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:44 GMT
	I0103 19:19:44.525605  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:45.023158  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:45.023178  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:45.023186  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:45.023192  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:45.025405  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:45.025427  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:45.025436  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:45.025444  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:45.025451  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:45.025460  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:45 GMT
	I0103 19:19:45.025469  102835 round_trippers.go:580]     Audit-Id: 40194d30-87ad-4bc6-875f-ac81d124aa99
	I0103 19:19:45.025482  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:45.025583  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:45.523145  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:45.523167  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:45.523175  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:45.523181  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:45.525481  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:45.525503  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:45.525515  102835 round_trippers.go:580]     Audit-Id: 4d8c67fd-f2de-4037-9646-5712d3bcb8ed
	I0103 19:19:45.525523  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:45.525530  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:45.525537  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:45.525544  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:45.525553  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:45 GMT
	I0103 19:19:45.525672  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:45.525980  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:46.023247  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:46.023266  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:46.023275  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:46.023280  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:46.025720  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:46.025743  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:46.025753  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:46 GMT
	I0103 19:19:46.025761  102835 round_trippers.go:580]     Audit-Id: 124c8a18-5dc7-47a6-846b-e4bffab7e649
	I0103 19:19:46.025768  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:46.025775  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:46.025788  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:46.025796  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:46.025932  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:46.523525  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:46.523549  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:46.523564  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:46.523570  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:46.526570  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:46.526606  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:46.526616  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:46 GMT
	I0103 19:19:46.526624  102835 round_trippers.go:580]     Audit-Id: 9863324d-15fc-4787-8bab-53256ddb6726
	I0103 19:19:46.526633  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:46.526640  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:46.526648  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:46.526664  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:46.526863  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:47.023323  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:47.023357  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:47.023365  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:47.023371  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:47.025650  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:47.025672  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:47.025678  102835 round_trippers.go:580]     Audit-Id: ee339e3d-585e-454b-baff-723b9c7e62cc
	I0103 19:19:47.025684  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:47.025689  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:47.025694  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:47.025700  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:47.025723  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:47 GMT
	I0103 19:19:47.025853  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:47.523122  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:47.523144  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:47.523152  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:47.523158  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:47.525403  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:47.525425  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:47.525434  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:47.525443  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:47.525451  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:47.525459  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:47.525466  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:47 GMT
	I0103 19:19:47.525474  102835 round_trippers.go:580]     Audit-Id: fbfb90dc-ca31-4bca-a63d-8c6a56f9bed0
	I0103 19:19:47.525586  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:48.023170  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:48.023192  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:48.023200  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:48.023206  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:48.025531  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:48.025550  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:48.025557  102835 round_trippers.go:580]     Audit-Id: d8dc5857-dee7-45f3-bc52-b0386821aede
	I0103 19:19:48.025563  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:48.025568  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:48.025573  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:48.025580  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:48.025593  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:48 GMT
	I0103 19:19:48.025723  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:48.026067  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:48.523358  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:48.523377  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:48.523385  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:48.523391  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:48.525457  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:48.525475  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:48.525482  102835 round_trippers.go:580]     Audit-Id: d6a01c13-5b61-4288-a0b7-b56d51583d88
	I0103 19:19:48.525487  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:48.525492  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:48.525497  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:48.525502  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:48.525507  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:48 GMT
	I0103 19:19:48.525721  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:49.023317  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:49.023352  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:49.023366  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:49.023374  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:49.025589  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:49.025608  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:49.025615  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:49.025621  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:49 GMT
	I0103 19:19:49.025626  102835 round_trippers.go:580]     Audit-Id: 66261643-45e3-4708-b614-cf01e04e7b78
	I0103 19:19:49.025631  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:49.025636  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:49.025641  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:49.025798  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:49.523815  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:49.523839  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:49.523873  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:49.523880  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:49.526161  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:49.526185  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:49.526195  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:49.526202  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:49.526210  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:49.526217  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:49 GMT
	I0103 19:19:49.526228  102835 round_trippers.go:580]     Audit-Id: 6cc20c0e-86bc-4461-8e51-26dac9ffb185
	I0103 19:19:49.526235  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:49.526418  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:50.024044  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:50.024065  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:50.024081  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:50.024088  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:50.026175  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:50.026196  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:50.026203  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:50.026209  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:50.026214  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:50.026219  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:50 GMT
	I0103 19:19:50.026224  102835 round_trippers.go:580]     Audit-Id: b81c8e7e-7d4c-4aae-91a7-f6c6788c4678
	I0103 19:19:50.026229  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:50.026362  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:50.026655  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:50.523818  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:50.523839  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:50.523847  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:50.523854  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:50.525996  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:50.526020  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:50.526031  102835 round_trippers.go:580]     Audit-Id: 09d89a01-4def-4f07-90e5-3ff2d9942792
	I0103 19:19:50.526039  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:50.526046  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:50.526055  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:50.526063  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:50.526076  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:50 GMT
	I0103 19:19:50.526295  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:51.023369  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:51.023394  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:51.023402  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:51.023408  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:51.025762  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:51.025781  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:51.025791  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:51.025798  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:51 GMT
	I0103 19:19:51.025805  102835 round_trippers.go:580]     Audit-Id: fef1076c-cfd3-4ea3-b661-a74803908414
	I0103 19:19:51.025813  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:51.025820  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:51.025829  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:51.025996  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:51.523377  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:51.523400  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:51.523407  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:51.523414  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:51.525591  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:51.525617  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:51.525626  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:51.525634  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:51.525643  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:51 GMT
	I0103 19:19:51.525651  102835 round_trippers.go:580]     Audit-Id: 8ff0226e-d838-4bd7-967e-fb224c05ebc1
	I0103 19:19:51.525662  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:51.525673  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:51.525860  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:52.023452  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:52.023481  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:52.023493  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:52.023502  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:52.025684  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:52.025702  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:52.025708  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:52.025714  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:52.025719  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:52 GMT
	I0103 19:19:52.025724  102835 round_trippers.go:580]     Audit-Id: d19862fd-4704-48d9-b1e2-2141aab866e7
	I0103 19:19:52.025729  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:52.025744  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:52.025928  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:52.523446  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:52.523469  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:52.523477  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:52.523485  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:52.525728  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:52.525747  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:52.525754  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:52 GMT
	I0103 19:19:52.525759  102835 round_trippers.go:580]     Audit-Id: 5b63c57a-204a-4594-b8d0-be9d7402a0ef
	I0103 19:19:52.525764  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:52.525770  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:52.525774  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:52.525779  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:52.525929  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:52.526241  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:53.023521  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:53.023541  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:53.023548  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:53.023555  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:53.025661  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:53.025678  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:53.025685  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:53.025690  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:53 GMT
	I0103 19:19:53.025695  102835 round_trippers.go:580]     Audit-Id: 7b858d67-772d-45c0-bbf8-49d47da05c89
	I0103 19:19:53.025701  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:53.025710  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:53.025721  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:53.025838  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:53.523363  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:53.523387  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:53.523395  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:53.523401  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:53.525614  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:53.525633  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:53.525644  102835 round_trippers.go:580]     Audit-Id: d99ef4e9-6d81-43cb-af99-1a8bc5925493
	I0103 19:19:53.525652  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:53.525660  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:53.525668  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:53.525674  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:53.525686  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:53 GMT
	I0103 19:19:53.525826  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:54.023591  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:54.023612  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:54.023620  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:54.023626  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:54.025661  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:54.025684  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:54.025693  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:54.025700  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:54 GMT
	I0103 19:19:54.025708  102835 round_trippers.go:580]     Audit-Id: 06dd44e9-8008-4b8f-a73f-d07111a3dcab
	I0103 19:19:54.025716  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:54.025725  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:54.025734  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:54.025848  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:54.523743  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:54.523768  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:54.523776  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:54.523782  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:54.526033  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:54.526051  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:54.526061  102835 round_trippers.go:580]     Audit-Id: ed52ecd5-f4a5-41c0-ada1-25d5da11eb88
	I0103 19:19:54.526070  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:54.526078  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:54.526086  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:54.526098  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:54.526106  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:54 GMT
	I0103 19:19:54.526264  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:54.526610  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:55.023952  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:55.023976  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:55.023984  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:55.023989  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:55.026246  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:55.026271  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:55.026280  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:55.026288  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:55.026295  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:55.026303  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:55.026309  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:55 GMT
	I0103 19:19:55.026316  102835 round_trippers.go:580]     Audit-Id: 438d3815-6529-47f0-b3f3-1efcafb96486
	I0103 19:19:55.026495  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:55.523087  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:55.523113  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:55.523125  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:55.523135  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:55.525463  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:55.525482  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:55.525493  102835 round_trippers.go:580]     Audit-Id: 8c84cf0e-8733-424b-9fbb-5ff9aff5dcdc
	I0103 19:19:55.525501  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:55.525510  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:55.525518  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:55.525527  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:55.525537  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:55 GMT
	I0103 19:19:55.525684  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:56.023285  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:56.023314  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:56.023326  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:56.023336  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:56.025530  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:56.025552  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:56.025561  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:56.025569  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:56.025577  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:56.025584  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:56.025597  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:56 GMT
	I0103 19:19:56.025613  102835 round_trippers.go:580]     Audit-Id: 9a220909-b9e6-4308-99eb-02798d8e469a
	I0103 19:19:56.025798  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:56.523151  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:56.523191  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:56.523200  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:56.523205  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:56.525377  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:56.525395  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:56.525402  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:56.525407  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:56.525412  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:56.525418  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:56 GMT
	I0103 19:19:56.525423  102835 round_trippers.go:580]     Audit-Id: 68273a7c-db1f-4936-a2b4-c5759612ed01
	I0103 19:19:56.525428  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:56.525590  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:57.023234  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:57.023259  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:57.023267  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:57.023274  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:57.025397  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:57.025419  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:57.025428  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:57.025446  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:57 GMT
	I0103 19:19:57.025454  102835 round_trippers.go:580]     Audit-Id: 3b76e633-3f50-4436-9662-0582e27992eb
	I0103 19:19:57.025462  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:57.025471  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:57.025484  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:57.025616  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:57.025927  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:57.523182  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:57.523224  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:57.523233  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:57.523239  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:57.525455  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:57.525473  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:57.525479  102835 round_trippers.go:580]     Audit-Id: e8eba617-8dfc-4302-aeb7-a55cab1baef5
	I0103 19:19:57.525485  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:57.525490  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:57.525495  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:57.525500  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:57.525505  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:57 GMT
	I0103 19:19:57.525641  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:58.023133  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:58.023157  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:58.023165  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:58.023171  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:58.025531  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:58.025550  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:58.025557  102835 round_trippers.go:580]     Audit-Id: 5e89e735-a34e-4395-9fcd-17fb515668d6
	I0103 19:19:58.025562  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:58.025567  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:58.025572  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:58.025577  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:58.025583  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:58 GMT
	I0103 19:19:58.025745  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:58.523360  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:58.523383  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:58.523391  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:58.523397  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:58.525632  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:58.525650  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:58.525657  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:58.525662  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:58.525667  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:58.525677  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:58.525683  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:58 GMT
	I0103 19:19:58.525691  102835 round_trippers.go:580]     Audit-Id: 0316a19c-fb1c-4f31-be67-f4765fee546a
	I0103 19:19:58.525830  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:59.023405  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:59.023427  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:59.023435  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:59.023440  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:59.025505  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:59.025522  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:59.025528  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:59.025534  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:59.025539  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:59 GMT
	I0103 19:19:59.025544  102835 round_trippers.go:580]     Audit-Id: 7f66b603-15ed-4e60-8359-f571d3483889
	I0103 19:19:59.025549  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:59.025554  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:59.025710  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:19:59.026046  102835 node_ready.go:58] node "multinode-867906" has status "Ready":"False"
	I0103 19:19:59.523733  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:19:59.523755  102835 round_trippers.go:469] Request Headers:
	I0103 19:19:59.523766  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:19:59.523774  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:19:59.526311  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:19:59.526332  102835 round_trippers.go:577] Response Headers:
	I0103 19:19:59.526340  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:19:59 GMT
	I0103 19:19:59.526346  102835 round_trippers.go:580]     Audit-Id: e0648dff-d974-4fba-a7b9-120c1be9973f
	I0103 19:19:59.526351  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:19:59.526356  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:19:59.526366  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:19:59.526372  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:19:59.526600  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:20:00.023164  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:00.023188  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:00.023196  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:00.023205  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:00.025438  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:00.025456  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:00.025463  102835 round_trippers.go:580]     Audit-Id: 1882aaa1-7e32-4fca-81d4-7e6f0ee8cf9d
	I0103 19:20:00.025469  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:00.025474  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:00.025479  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:00.025484  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:00.025490  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:00 GMT
	I0103 19:20:00.025628  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:20:00.523270  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:00.523296  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:00.523309  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:00.523319  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:00.525601  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:00.525621  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:00.525628  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:00.525634  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:00 GMT
	I0103 19:20:00.525643  102835 round_trippers.go:580]     Audit-Id: a407f11d-9a9f-4dc2-ae3e-d4f12f11e67c
	I0103 19:20:00.525648  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:00.525654  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:00.525663  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:00.525802  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"295","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0103 19:20:01.023186  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:01.023209  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:01.023219  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:01.023226  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:01.027599  102835 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 19:20:01.027621  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:01.027632  102835 round_trippers.go:580]     Audit-Id: 86977220-1b2d-454f-a95f-32a69b83c25b
	I0103 19:20:01.027641  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:01.027650  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:01.027659  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:01.027671  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:01.027682  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:01 GMT
	I0103 19:20:01.027791  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:01.028116  102835 node_ready.go:49] node "multinode-867906" has status "Ready":"True"
	I0103 19:20:01.028132  102835 node_ready.go:38] duration metric: took 30.505228664s waiting for node "multinode-867906" to be "Ready" ...
	I0103 19:20:01.028142  102835 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
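
The entries above record the readiness poll that produced the 30.5s wait: a GET to /api/v1/nodes/multinode-867906 roughly every 500ms, reading the node's Ready condition until it reports True. A minimal client-go sketch of an equivalent poll (illustrative only: the kubeconfig path is an assumption and nodeReady is a hypothetical helper, not minikube's node_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig pointing at the minikube profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, matching the cadence of the log timestamps above.
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-867906", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

From the command line, kubectl wait --for=condition=Ready node/multinode-867906 --timeout=6m performs the same check.
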
	I0103 19:20:01.028200  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:20:01.028205  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:01.028212  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:01.028217  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:01.031258  102835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:20:01.031282  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:01.031296  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:01.031305  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:01.031314  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:01 GMT
	I0103 19:20:01.031323  102835 round_trippers.go:580]     Audit-Id: bac43fae-2ad1-4ccd-8f3d-843374517936
	I0103 19:20:01.031330  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:01.031341  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:01.031816  102835 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"398"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"391","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0103 19:20:01.036286  102835 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qb6ll" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:01.036365  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qb6ll
	I0103 19:20:01.036377  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:01.036388  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:01.036400  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:01.038362  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:01.038383  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:01.038395  102835 round_trippers.go:580]     Audit-Id: 1b978dbc-470b-45c9-b6b7-656de63d3ed2
	I0103 19:20:01.038404  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:01.038414  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:01.038421  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:01.038433  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:01.038446  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:01 GMT
	I0103 19:20:01.038555  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"391","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0103 19:20:01.039034  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:01.039051  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:01.039062  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:01.039075  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:01.040607  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:01.040625  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:01.040634  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:01.040643  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:01 GMT
	I0103 19:20:01.040652  102835 round_trippers.go:580]     Audit-Id: 79deffa5-5dd1-4caf-9dda-e07c4365822c
	I0103 19:20:01.040668  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:01.040676  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:01.040688  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:01.040832  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:01.537083  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qb6ll
	I0103 19:20:01.537106  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:01.537122  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:01.537128  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:01.539494  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:01.539513  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:01.539520  102835 round_trippers.go:580]     Audit-Id: a844a0fe-745c-4363-8cb7-bb1ed0de0a5c
	I0103 19:20:01.539526  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:01.539531  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:01.539536  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:01.539541  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:01.539547  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:01 GMT
	I0103 19:20:01.539734  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"401","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0103 19:20:01.540259  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:01.540279  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:01.540286  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:01.540294  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:01.542224  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:01.542241  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:01.542247  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:01.542253  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:01.542258  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:01.542263  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:01.542268  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:01 GMT
	I0103 19:20:01.542276  102835 round_trippers.go:580]     Audit-Id: cfda61dc-2f47-4c14-b361-6705032eb841
	I0103 19:20:01.542443  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:02.036658  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qb6ll
	I0103 19:20:02.036681  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.036689  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.036695  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.039211  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:02.039233  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.039240  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.039246  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.039251  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.039256  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.039261  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.039266  102835 round_trippers.go:580]     Audit-Id: 43eaf422-e39c-4b11-8b42-160aa130522c
	I0103 19:20:02.039476  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"401","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0103 19:20:02.039915  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:02.039927  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.039934  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.039940  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.041831  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.041847  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.041853  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.041859  102835 round_trippers.go:580]     Audit-Id: ac94d8c4-a1b3-48dc-a415-6a41684362ae
	I0103 19:20:02.041864  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.041869  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.041875  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.041884  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.042073  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:02.537481  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qb6ll
	I0103 19:20:02.537512  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.537523  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.537531  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.539969  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:02.539996  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.540007  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.540018  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.540026  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.540033  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.540040  102835 round_trippers.go:580]     Audit-Id: ee5ff27a-e945-41ca-a60b-0656c149fc79
	I0103 19:20:02.540048  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.540224  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"405","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0103 19:20:02.540677  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:02.540691  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.540698  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.540705  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.542610  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.542633  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.542643  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.542651  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.542659  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.542667  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.542681  102835 round_trippers.go:580]     Audit-Id: aa42ea44-b43e-4b66-95ea-a022590e9638
	I0103 19:20:02.542692  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.542879  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:02.543199  102835 pod_ready.go:92] pod "coredns-5dd5756b68-qb6ll" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:02.543216  102835 pod_ready.go:81] duration metric: took 1.506906512s waiting for pod "coredns-5dd5756b68-qb6ll" in "kube-system" namespace to be "Ready" ...
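
Once the node is Ready, the same poll repeats per system-critical pod, now reading the pod's Ready condition (1.5s for coredns above; the control-plane pods that follow are already Ready and resolve in milliseconds). The pod-level check is the direct analogue of nodeReady in the sketch above (again a sketch with hypothetical helper names, not minikube's pod_ready.go):

	// podReady reports whether a pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// Usage, reusing the clientset from the sketch above:
	//   pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-qb6ll", metav1.GetOptions{})
	//   if err == nil && podReady(pod) { fmt.Println("pod is Ready") }
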
	I0103 19:20:02.543225  102835 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.543276  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-867906
	I0103 19:20:02.543283  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.543290  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.543296  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.545322  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:02.545344  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.545354  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.545362  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.545370  102835 round_trippers.go:580]     Audit-Id: 39687cd4-3ec3-4c30-bb9b-53e254d5d495
	I0103 19:20:02.545377  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.545387  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.545394  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.545514  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-867906","namespace":"kube-system","uid":"e218d02e-1660-479e-91d7-9a25bce7cbc1","resourceVersion":"277","creationTimestamp":"2024-01-03T19:19:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"096508eeb789ebd52eb384a7c8522295","kubernetes.io/config.mirror":"096508eeb789ebd52eb384a7c8522295","kubernetes.io/config.seen":"2024-01-03T19:19:15.888143739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0103 19:20:02.545986  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:02.546001  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.546012  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.546022  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.547707  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.547722  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.547729  102835 round_trippers.go:580]     Audit-Id: 52ad8d27-15bf-4a81-8962-643e7add5322
	I0103 19:20:02.547736  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.547742  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.547748  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.547756  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.547764  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.547925  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:02.548204  102835 pod_ready.go:92] pod "etcd-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:02.548220  102835 pod_ready.go:81] duration metric: took 4.989614ms waiting for pod "etcd-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.548231  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.548278  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-867906
	I0103 19:20:02.548285  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.548292  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.548297  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.550131  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.550170  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.550180  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.550188  102835 round_trippers.go:580]     Audit-Id: e7a4262d-6244-4858-aa6b-9b3287375a1b
	I0103 19:20:02.550196  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.550205  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.550213  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.550229  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.550432  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-867906","namespace":"kube-system","uid":"1f53d173-6053-4eae-aaa9-8ffcb1c17634","resourceVersion":"260","creationTimestamp":"2024-01-03T19:19:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a0b48c6d0d511ddb918d1ee65203574b","kubernetes.io/config.mirror":"a0b48c6d0d511ddb918d1ee65203574b","kubernetes.io/config.seen":"2024-01-03T19:19:10.132179438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0103 19:20:02.550837  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:02.550850  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.550857  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.550862  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.552509  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.552524  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.552530  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.552536  102835 round_trippers.go:580]     Audit-Id: f270ffd3-8e4a-4e99-847f-13eaa15f0885
	I0103 19:20:02.552541  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.552550  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.552557  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.552565  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.552704  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:02.553026  102835 pod_ready.go:92] pod "kube-apiserver-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:02.553043  102835 pod_ready.go:81] duration metric: took 4.806165ms waiting for pod "kube-apiserver-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.553057  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.553114  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-867906
	I0103 19:20:02.553123  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.553129  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.553135  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.554977  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.554995  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.555002  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.555007  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.555012  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.555017  102835 round_trippers.go:580]     Audit-Id: 01dd919f-9e83-4247-8be6-4b44cc0f3677
	I0103 19:20:02.555022  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.555027  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.555215  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-867906","namespace":"kube-system","uid":"528f1b6f-da53-4e14-87dc-90af9b16865b","resourceVersion":"256","creationTimestamp":"2024-01-03T19:19:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2c9e1aa27124c3c4642e5059650a8424","kubernetes.io/config.mirror":"2c9e1aa27124c3c4642e5059650a8424","kubernetes.io/config.seen":"2024-01-03T19:19:15.888157675Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0103 19:20:02.555652  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:02.555668  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.555675  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.555681  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.557428  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.557443  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.557450  102835 round_trippers.go:580]     Audit-Id: 4253808a-9798-423f-aadf-d469697d5705
	I0103 19:20:02.557456  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.557461  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.557466  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.557471  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.557476  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.557629  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:02.557912  102835 pod_ready.go:92] pod "kube-controller-manager-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:02.557928  102835 pod_ready.go:81] duration metric: took 4.859952ms waiting for pod "kube-controller-manager-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.557937  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrm8b" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.557978  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrm8b
	I0103 19:20:02.557986  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.557992  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.557998  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.559811  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:02.559828  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.559835  102835 round_trippers.go:580]     Audit-Id: 221aec70-de03-4744-87b0-e71a2c6bcf04
	I0103 19:20:02.559840  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.559845  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.559851  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.559856  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.559863  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.560006  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrm8b","generateName":"kube-proxy-","namespace":"kube-system","uid":"025f5c46-e360-423d-9c4f-eee8af0472ae","resourceVersion":"372","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bb84658-ab5f-48b7-bb1e-58fdc441b4c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bb84658-ab5f-48b7-bb1e-58fdc441b4c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0103 19:20:02.623622  102835 request.go:629] Waited for 63.246955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:02.623700  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:02.623706  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.623717  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.623731  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.626122  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:02.626161  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.626172  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.626181  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.626186  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.626191  102835 round_trippers.go:580]     Audit-Id: cd5c6353-9710-42e5-9a7a-249953007dc6
	I0103 19:20:02.626196  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.626201  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.626377  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:02.626672  102835 pod_ready.go:92] pod "kube-proxy-nrm8b" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:02.626690  102835 pod_ready.go:81] duration metric: took 68.747117ms waiting for pod "kube-proxy-nrm8b" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.626703  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:02.824168  102835 request.go:629] Waited for 197.390575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-867906
	I0103 19:20:02.824254  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-867906
	I0103 19:20:02.824259  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:02.824267  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:02.824273  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:02.826994  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:02.827012  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:02.827019  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:02.827024  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:02.827030  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:02.827038  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:02.827047  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:02 GMT
	I0103 19:20:02.827056  102835 round_trippers.go:580]     Audit-Id: 81981d64-6296-4b13-8c70-716ee4db87e2
	I0103 19:20:02.827226  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-867906","namespace":"kube-system","uid":"2a794cae-9d56-476c-9b6b-51742cdf9118","resourceVersion":"258","creationTimestamp":"2024-01-03T19:19:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6888e32d69fb1b48c672bb546f324150","kubernetes.io/config.mirror":"6888e32d69fb1b48c672bb546f324150","kubernetes.io/config.seen":"2024-01-03T19:19:15.888158968Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0103 19:20:03.023964  102835 request.go:629] Waited for 196.387264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:03.024018  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:03.024026  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:03.024037  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:03.024051  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:03.026436  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:03.026454  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:03.026462  102835 round_trippers.go:580]     Audit-Id: 06972483-9cdb-4792-85fc-ee482f6ee6f1
	I0103 19:20:03.026467  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:03.026473  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:03.026478  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:03.026483  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:03.026488  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:03 GMT
	I0103 19:20:03.026624  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:03.027030  102835 pod_ready.go:92] pod "kube-scheduler-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:03.027053  102835 pod_ready.go:81] duration metric: took 400.342822ms waiting for pod "kube-scheduler-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:03.027066  102835 pod_ready.go:38] duration metric: took 1.998910689s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
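
The block above is minikube's pod_ready wait loop: for each system-critical pod it GETs the pod, checks its Ready condition, then re-fetches the node. A minimal client-go sketch of that loop follows; the helper name waitPodReady and the plain sleep-poll are illustrative, not minikube's actual code, and error handling is reduced to the essentials.

    // Sketch: poll a pod until its Ready condition is True, mirroring the
    // pod_ready.go wait recorded above. waitPodReady is a hypothetical name.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // the log's `has status "Ready":"True"` corresponds to this condition
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "etcd-multinode-867906", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ready")
    }
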
	I0103 19:20:03.027082  102835 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:20:03.027166  102835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:20:03.036595  102835 command_runner.go:130] > 1423
	I0103 19:20:03.037387  102835 api_server.go:72] duration metric: took 33.059122289s to wait for apiserver process to appear ...
	I0103 19:20:03.037410  102835 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:20:03.037428  102835 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0103 19:20:03.041661  102835 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0103 19:20:03.041731  102835 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0103 19:20:03.041742  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:03.041751  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:03.041760  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:03.042870  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:03.042884  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:03.042891  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:03.042896  102835 round_trippers.go:580]     Content-Length: 264
	I0103 19:20:03.042902  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:03 GMT
	I0103 19:20:03.042907  102835 round_trippers.go:580]     Audit-Id: 5da23727-20a0-4b5a-9e57-e16679817192
	I0103 19:20:03.042912  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:03.042917  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:03.042922  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:03.042938  102835 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0103 19:20:03.043040  102835 api_server.go:141] control plane version: v1.28.4
	I0103 19:20:03.043061  102835 api_server.go:131] duration metric: took 5.645613ms to wait for apiserver health ...
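
The healthz/version wait above amounts to two GETs against the apiserver at 192.168.58.2:8443. A stripped-down sketch is below; it skips TLS verification and authentication purely for brevity, whereas minikube authenticates with the client certificates from its kubeconfig, so an anonymous probe like this may be rejected on a locked-down cluster.

    // Sketch of the /healthz and /version probes logged above (demo only:
    // InsecureSkipVerify and no client credentials).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.58.2:8443" + path)
            if err != nil {
                panic(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
        }
    }
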
	I0103 19:20:03.043069  102835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:20:03.223334  102835 request.go:629] Waited for 180.200763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:20:03.223408  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:20:03.223413  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:03.223420  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:03.223426  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:03.226486  102835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:20:03.226514  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:03.226524  102835 round_trippers.go:580]     Audit-Id: 155294b4-3335-41fe-b109-ac6a03194d4b
	I0103 19:20:03.226531  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:03.226539  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:03.226546  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:03.226554  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:03.226566  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:03 GMT
	I0103 19:20:03.227042  102835 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"405","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0103 19:20:03.228831  102835 system_pods.go:59] 8 kube-system pods found
	I0103 19:20:03.228859  102835 system_pods.go:61] "coredns-5dd5756b68-qb6ll" [a10d6003-2e28-4c8f-a743-87a3a9e768be] Running
	I0103 19:20:03.228867  102835 system_pods.go:61] "etcd-multinode-867906" [e218d02e-1660-479e-91d7-9a25bce7cbc1] Running
	I0103 19:20:03.228871  102835 system_pods.go:61] "kindnet-bzwc8" [bae42292-7c63-45ab-963e-34f9ffe22674] Running
	I0103 19:20:03.228878  102835 system_pods.go:61] "kube-apiserver-multinode-867906" [1f53d173-6053-4eae-aaa9-8ffcb1c17634] Running
	I0103 19:20:03.228883  102835 system_pods.go:61] "kube-controller-manager-multinode-867906" [528f1b6f-da53-4e14-87dc-90af9b16865b] Running
	I0103 19:20:03.228891  102835 system_pods.go:61] "kube-proxy-nrm8b" [025f5c46-e360-423d-9c4f-eee8af0472ae] Running
	I0103 19:20:03.228895  102835 system_pods.go:61] "kube-scheduler-multinode-867906" [2a794cae-9d56-476c-9b6b-51742cdf9118] Running
	I0103 19:20:03.228899  102835 system_pods.go:61] "storage-provisioner" [2e6896b5-2324-446b-b295-0d0a2b8ad24c] Running
	I0103 19:20:03.228905  102835 system_pods.go:74] duration metric: took 185.830887ms to wait for pod list to return data ...
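
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, not from the apiserver (the message says as much). The default limits are 5 QPS with a burst of 10; raising them on the rest.Config suppresses these waits. The values below are illustrative.

    // Sketch: lift client-go's client-side rate limits, which produce the
    // "Waited for ... due to client-side throttling" log lines above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        _ = kubernetes.NewForConfigOrDie(cfg)
    }
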
	I0103 19:20:03.228915  102835 default_sa.go:34] waiting for default service account to be created ...
	I0103 19:20:03.423238  102835 request.go:629] Waited for 194.253853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:20:03.423295  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0103 19:20:03.423300  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:03.423307  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:03.423313  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:03.425579  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:03.425598  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:03.425605  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:03 GMT
	I0103 19:20:03.425615  102835 round_trippers.go:580]     Audit-Id: 8eb9a90c-8d94-4cb8-973e-a4078f512681
	I0103 19:20:03.425620  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:03.425625  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:03.425630  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:03.425639  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:03.425644  102835 round_trippers.go:580]     Content-Length: 261
	I0103 19:20:03.425662  102835 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"54f4548d-20f2-4e2b-95a9-4d3f18792d06","resourceVersion":"325","creationTimestamp":"2024-01-03T19:19:29Z"}}]}
	I0103 19:20:03.425829  102835 default_sa.go:45] found service account: "default"
	I0103 19:20:03.425844  102835 default_sa.go:55] duration metric: took 196.923903ms for default service account to be created ...
	I0103 19:20:03.425851  102835 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 19:20:03.624264  102835 request.go:629] Waited for 198.356415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:20:03.624354  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:20:03.624361  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:03.624369  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:03.624378  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:03.627557  102835 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 19:20:03.627584  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:03.627595  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:03.627604  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:03.627613  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:03.627622  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:03 GMT
	I0103 19:20:03.627630  102835 round_trippers.go:580]     Audit-Id: 17cbe2c9-03b5-4d89-bb88-b00a9a8a7eed
	I0103 19:20:03.627639  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:03.628075  102835 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"405","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0103 19:20:03.629758  102835 system_pods.go:86] 8 kube-system pods found
	I0103 19:20:03.629778  102835 system_pods.go:89] "coredns-5dd5756b68-qb6ll" [a10d6003-2e28-4c8f-a743-87a3a9e768be] Running
	I0103 19:20:03.629783  102835 system_pods.go:89] "etcd-multinode-867906" [e218d02e-1660-479e-91d7-9a25bce7cbc1] Running
	I0103 19:20:03.629787  102835 system_pods.go:89] "kindnet-bzwc8" [bae42292-7c63-45ab-963e-34f9ffe22674] Running
	I0103 19:20:03.629791  102835 system_pods.go:89] "kube-apiserver-multinode-867906" [1f53d173-6053-4eae-aaa9-8ffcb1c17634] Running
	I0103 19:20:03.629795  102835 system_pods.go:89] "kube-controller-manager-multinode-867906" [528f1b6f-da53-4e14-87dc-90af9b16865b] Running
	I0103 19:20:03.629801  102835 system_pods.go:89] "kube-proxy-nrm8b" [025f5c46-e360-423d-9c4f-eee8af0472ae] Running
	I0103 19:20:03.629805  102835 system_pods.go:89] "kube-scheduler-multinode-867906" [2a794cae-9d56-476c-9b6b-51742cdf9118] Running
	I0103 19:20:03.629808  102835 system_pods.go:89] "storage-provisioner" [2e6896b5-2324-446b-b295-0d0a2b8ad24c] Running
	I0103 19:20:03.629814  102835 system_pods.go:126] duration metric: took 203.958431ms to wait for k8s-apps to be running ...
	I0103 19:20:03.629820  102835 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:20:03.629864  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:20:03.640376  102835 system_svc.go:56] duration metric: took 10.549464ms WaitForService to wait for kubelet.
	I0103 19:20:03.640396  102835 kubeadm.go:581] duration metric: took 33.662137965s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:20:03.640414  102835 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:20:03.823829  102835 request.go:629] Waited for 183.341651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0103 19:20:03.823904  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0103 19:20:03.823912  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:03.823920  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:03.823928  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:03.826281  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:03.826304  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:03.826313  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:03.826323  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:03.826330  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:03.826338  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:03 GMT
	I0103 19:20:03.826349  102835 round_trippers.go:580]     Audit-Id: 4ac88e04-e412-4ae2-a802-02e8fd29235b
	I0103 19:20:03.826357  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:03.826493  102835 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0103 19:20:03.826862  102835 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0103 19:20:03.826883  102835 node_conditions.go:123] node cpu capacity is 8
	I0103 19:20:03.826893  102835 node_conditions.go:105] duration metric: took 186.474509ms to run NodePressure ...
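
The NodePressure step reads node capacity out of the NodeList response; the "ephemeral capacity is 304681132Ki" and "cpu capacity is 8" figures live in status.capacity. A short sketch of where those values sit in the API object, assuming a clientset built as in the earlier sketches:

    // Sketch: list nodes and print the capacity fields the log reports above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }
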
	I0103 19:20:03.826908  102835 start.go:228] waiting for startup goroutines ...
	I0103 19:20:03.826924  102835 start.go:233] waiting for cluster config update ...
	I0103 19:20:03.826935  102835 start.go:242] writing updated cluster config ...
	I0103 19:20:03.829354  102835 out.go:177] 
	I0103 19:20:03.830758  102835 config.go:182] Loaded profile config "multinode-867906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:20:03.830837  102835 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/config.json ...
	I0103 19:20:03.832543  102835 out.go:177] * Starting worker node multinode-867906-m02 in cluster multinode-867906
	I0103 19:20:03.834237  102835 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:20:03.835627  102835 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:20:03.836874  102835 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:20:03.836899  102835 cache.go:56] Caching tarball of preloaded images
	I0103 19:20:03.837000  102835 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:20:03.837021  102835 preload.go:174] Found /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0103 19:20:03.837033  102835 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:20:03.837119  102835 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/config.json ...
	I0103 19:20:03.853197  102835 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 19:20:03.853223  102835 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 19:20:03.853248  102835 cache.go:194] Successfully downloaded all kic artifacts
	I0103 19:20:03.853284  102835 start.go:365] acquiring machines lock for multinode-867906-m02: {Name:mk512214d5f98748dbc76a5381760fc01f057800 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:20:03.853418  102835 start.go:369] acquired machines lock for "multinode-867906-m02" in 110.345µs
	I0103 19:20:03.853450  102835 start.go:93] Provisioning new machine with config: &{Name:multinode-867906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-867906 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:20:03.853545  102835 start.go:125] createHost starting for "m02" (driver="docker")
	I0103 19:20:03.855995  102835 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0103 19:20:03.856123  102835 start.go:159] libmachine.API.Create for "multinode-867906" (driver="docker")
	I0103 19:20:03.856150  102835 client.go:168] LocalClient.Create starting
	I0103 19:20:03.856234  102835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem
	I0103 19:20:03.856278  102835 main.go:141] libmachine: Decoding PEM data...
	I0103 19:20:03.856298  102835 main.go:141] libmachine: Parsing certificate...
	I0103 19:20:03.856360  102835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem
	I0103 19:20:03.856387  102835 main.go:141] libmachine: Decoding PEM data...
	I0103 19:20:03.856405  102835 main.go:141] libmachine: Parsing certificate...
	I0103 19:20:03.856670  102835 cli_runner.go:164] Run: docker network inspect multinode-867906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:20:03.874049  102835 network_create.go:77] Found existing network {name:multinode-867906 subnet:0xc0030521e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0103 19:20:03.874095  102835 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-867906-m02" container
	I0103 19:20:03.874168  102835 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 19:20:03.889496  102835 cli_runner.go:164] Run: docker volume create multinode-867906-m02 --label name.minikube.sigs.k8s.io=multinode-867906-m02 --label created_by.minikube.sigs.k8s.io=true
	I0103 19:20:03.907410  102835 oci.go:103] Successfully created a docker volume multinode-867906-m02
	I0103 19:20:03.907493  102835 cli_runner.go:164] Run: docker run --rm --name multinode-867906-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-867906-m02 --entrypoint /usr/bin/test -v multinode-867906-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 19:20:04.373148  102835 oci.go:107] Successfully prepared a docker volume multinode-867906-m02
	I0103 19:20:04.373181  102835 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:20:04.373202  102835 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 19:20:04.373272  102835 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-867906-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 19:20:09.448338  102835 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-867906-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (5.075027634s)
	I0103 19:20:09.448368  102835 kic.go:203] duration metric: took 5.075165 seconds to extract preloaded images to volume
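
The extraction step above is a single docker run that untars the lz4 preload into the node's named volume. A hand-rolled equivalent via os/exec would be roughly the following; the host tarball path is shortened to a placeholder and the image digest pinned in the log is omitted, so adjust both before running.

    // Sketch of the preload extraction: tar runs inside a throwaway container
    // so the lz4 archive lands in the docker volume. Not minikube's cli_runner.
    package main

    import "os/exec"

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // host tarball (adjust)
            "-v", "multinode-867906-m02:/extractDir",                    // target docker volume
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857", // digest omitted here
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(string(out))
        }
    }
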
	W0103 19:20:09.448487  102835 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 19:20:09.448571  102835 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 19:20:09.502977  102835 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-867906-m02 --name multinode-867906-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-867906-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-867906-m02 --network multinode-867906 --ip 192.168.58.3 --volume multinode-867906-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 19:20:09.815473  102835 cli_runner.go:164] Run: docker container inspect multinode-867906-m02 --format={{.State.Running}}
	I0103 19:20:09.832218  102835 cli_runner.go:164] Run: docker container inspect multinode-867906-m02 --format={{.State.Status}}
	I0103 19:20:09.848802  102835 cli_runner.go:164] Run: docker exec multinode-867906-m02 stat /var/lib/dpkg/alternatives/iptables
	I0103 19:20:09.915457  102835 oci.go:144] the created container "multinode-867906-m02" has a running status.
	I0103 19:20:09.915484  102835 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa...
	I0103 19:20:10.061937  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 19:20:10.061977  102835 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 19:20:10.083113  102835 cli_runner.go:164] Run: docker container inspect multinode-867906-m02 --format={{.State.Status}}
	I0103 19:20:10.103423  102835 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 19:20:10.103445  102835 kic_runner.go:114] Args: [docker exec --privileged multinode-867906-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 19:20:10.170616  102835 cli_runner.go:164] Run: docker container inspect multinode-867906-m02 --format={{.State.Status}}
	I0103 19:20:10.187445  102835 machine.go:88] provisioning docker machine ...
	I0103 19:20:10.187486  102835 ubuntu.go:169] provisioning hostname "multinode-867906-m02"
	I0103 19:20:10.187543  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:10.205702  102835 main.go:141] libmachine: Using SSH client type: native
	I0103 19:20:10.206270  102835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0103 19:20:10.206291  102835 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-867906-m02 && echo "multinode-867906-m02" | sudo tee /etc/hostname
	I0103 19:20:10.206977  102835 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58236->127.0.0.1:32852: read: connection reset by peer
	I0103 19:20:13.336781  102835 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-867906-m02
	
	I0103 19:20:13.336848  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:13.353038  102835 main.go:141] libmachine: Using SSH client type: native
	I0103 19:20:13.353459  102835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0103 19:20:13.353489  102835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-867906-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-867906-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-867906-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:20:13.470361  102835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
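
Provisioning here is plain SSH against the container's forwarded port (32852 in this run). A sketch using golang.org/x/crypto/ssh, reusing the key path and hostname command from the log; host-key checking is disabled, which is tolerable only because the endpoint is a local kic container.

    // Sketch of the hostname-provisioning SSH step logged above; roughly what
    // libmachine's native SSH client does, with minimal error handling.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32852", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-867906-m02 && echo "multinode-867906-m02" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }
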
	I0103 19:20:13.470391  102835 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-8915/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-8915/.minikube}
	I0103 19:20:13.470410  102835 ubuntu.go:177] setting up certificates
	I0103 19:20:13.470424  102835 provision.go:83] configureAuth start
	I0103 19:20:13.470471  102835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906-m02
	I0103 19:20:13.487037  102835 provision.go:138] copyHostCerts
	I0103 19:20:13.487083  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:20:13.487109  102835 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem, removing ...
	I0103 19:20:13.487118  102835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:20:13.487183  102835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem (1123 bytes)
	I0103 19:20:13.487253  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:20:13.487270  102835 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem, removing ...
	I0103 19:20:13.487274  102835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:20:13.487295  102835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem (1679 bytes)
	I0103 19:20:13.487337  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:20:13.487353  102835 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem, removing ...
	I0103 19:20:13.487358  102835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:20:13.487384  102835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem (1078 bytes)
	I0103 19:20:13.487428  102835 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem org=jenkins.multinode-867906-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-867906-m02]
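
The server certificate generated above carries the SAN list from the log (192.168.58.3, 127.0.0.1, localhost, minikube, multinode-867906-m02). A self-signed stand-in showing where those SANs land in a Go x509 template; minikube signs with its CA rather than self-signing, and the duplicate 127.0.0.1 entry is collapsed here.

    // Sketch: generate a server cert with the SANs recorded in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-867906-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-867906-m02"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
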
	I0103 19:20:13.634886  102835 provision.go:172] copyRemoteCerts
	I0103 19:20:13.634946  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:20:13.634977  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:13.651697  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa Username:docker}
	I0103 19:20:13.738470  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 19:20:13.738527  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 19:20:13.760294  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 19:20:13.760359  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:20:13.781360  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 19:20:13.781424  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0103 19:20:13.802383  102835 provision.go:86] duration metric: configureAuth took 331.944493ms
	I0103 19:20:13.802414  102835 ubuntu.go:193] setting minikube options for container-runtime
	I0103 19:20:13.802603  102835 config.go:182] Loaded profile config "multinode-867906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:20:13.802694  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:13.818732  102835 main.go:141] libmachine: Using SSH client type: native
	I0103 19:20:13.819076  102835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0103 19:20:13.819093  102835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:20:14.020929  102835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:20:14.020953  102835 machine.go:91] provisioned docker machine in 3.833484924s
	I0103 19:20:14.020965  102835 client.go:171] LocalClient.Create took 10.164808397s
	I0103 19:20:14.020989  102835 start.go:167] duration metric: libmachine.API.Create for "multinode-867906" took 10.164866356s
	I0103 19:20:14.021001  102835 start.go:300] post-start starting for "multinode-867906-m02" (driver="docker")
	I0103 19:20:14.021015  102835 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:20:14.021098  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:20:14.021145  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:14.037226  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa Username:docker}
	I0103 19:20:14.126728  102835 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:20:14.129671  102835 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0103 19:20:14.129687  102835 command_runner.go:130] > NAME="Ubuntu"
	I0103 19:20:14.129693  102835 command_runner.go:130] > VERSION_ID="22.04"
	I0103 19:20:14.129698  102835 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0103 19:20:14.129705  102835 command_runner.go:130] > VERSION_CODENAME=jammy
	I0103 19:20:14.129711  102835 command_runner.go:130] > ID=ubuntu
	I0103 19:20:14.129721  102835 command_runner.go:130] > ID_LIKE=debian
	I0103 19:20:14.129729  102835 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0103 19:20:14.129743  102835 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0103 19:20:14.129754  102835 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0103 19:20:14.129764  102835 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0103 19:20:14.129771  102835 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0103 19:20:14.129827  102835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 19:20:14.129863  102835 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 19:20:14.129875  102835 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 19:20:14.129884  102835 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 19:20:14.129898  102835 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/addons for local assets ...
	I0103 19:20:14.129958  102835 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/files for local assets ...
	I0103 19:20:14.130044  102835 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> 156702.pem in /etc/ssl/certs
	I0103 19:20:14.130055  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> /etc/ssl/certs/156702.pem
	I0103 19:20:14.130181  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:20:14.137584  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /etc/ssl/certs/156702.pem (1708 bytes)
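
[annotation] The filesync scan above walks .minikube/addons and .minikube/files on the host and mirrors whatever it finds onto the node, preserving the path relative to the files root (which is how 156702.pem lands in /etc/ssl/certs). A hedged Go sketch of that mapping, assuming a plain directory walk rather than minikube's internal filesync API:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    // scanLocalAssets walks root and maps each file to its on-node destination,
    // mirroring the relative path (a sketch, not minikube's filesync helpers).
    func scanLocalAssets(root string) (map[string]string, error) {
    	assets := map[string]string{}
    	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, walkErr error) error {
    		if walkErr != nil || d.IsDir() {
    			return walkErr
    		}
    		rel, err := filepath.Rel(root, p)
    		if err != nil {
    			return err
    		}
    		assets[p] = "/" + filepath.ToSlash(rel)
    		return nil
    	})
    	return assets, err
    }

    func main() {
    	m, err := scanLocalAssets("/home/jenkins/minikube-integration/17885-8915/.minikube/files")
    	if err != nil {
    		panic(err)
    	}
    	for src, dst := range m {
    		fmt.Println(src, "->", dst)
    	}
    }
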
	I0103 19:20:14.158365  102835 start.go:303] post-start completed in 137.34642ms
	I0103 19:20:14.158742  102835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906-m02
	I0103 19:20:14.175487  102835 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/config.json ...
	I0103 19:20:14.175716  102835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:20:14.175755  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:14.191449  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa Username:docker}
	I0103 19:20:14.278457  102835 command_runner.go:130] > 25%
	I0103 19:20:14.278721  102835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 19:20:14.282961  102835 command_runner.go:130] > 219G
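
[annotation] The two df probes above read used-percent and free gigabytes on /var before the host is declared ready (25% used, 219G free here). A minimal local equivalent of the same probes, assuming GNU coreutils df is available:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probes the log runs over SSH: column 5 of `df -h` is use%,
    	// column 4 of `df -BG` is available space in gigabytes.
    	usedPct, err := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
    	if err != nil {
    		panic(err)
    	}
    	freeG, err := exec.Command("sh", "-c", "df -BG /var | awk 'NR==2{print $4}'").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("used: %s, free: %s\n",
    		strings.TrimSpace(string(usedPct)), strings.TrimSpace(string(freeG)))
    }
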
	I0103 19:20:14.283003  102835 start.go:128] duration metric: createHost completed in 10.429444731s
	I0103 19:20:14.283014  102835 start.go:83] releasing machines lock for "multinode-867906-m02", held for 10.429583014s
	I0103 19:20:14.283074  102835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906-m02
	I0103 19:20:14.302049  102835 out.go:177] * Found network options:
	I0103 19:20:14.303709  102835 out.go:177]   - NO_PROXY=192.168.58.2
	W0103 19:20:14.305156  102835 proxy.go:119] fail to check proxy env: Error ip not in block
	W0103 19:20:14.305188  102835 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 19:20:14.305259  102835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:20:14.305303  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:14.305323  102835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:20:14.305367  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:20:14.321308  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa Username:docker}
	I0103 19:20:14.321833  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa Username:docker}
	I0103 19:20:14.537993  102835 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 19:20:14.538087  102835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:20:14.541985  102835 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0103 19:20:14.542009  102835 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0103 19:20:14.542017  102835 command_runner.go:130] > Device: b0h/176d	Inode: 577599      Links: 1
	I0103 19:20:14.542027  102835 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:20:14.542043  102835 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0103 19:20:14.542051  102835 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0103 19:20:14.542064  102835 command_runner.go:130] > Change: 2024-01-03 18:59:21.794240194 +0000
	I0103 19:20:14.542073  102835 command_runner.go:130] >  Birth: 2024-01-03 18:59:21.794240194 +0000
	I0103 19:20:14.542232  102835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:20:14.559627  102835 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 19:20:14.559708  102835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:20:14.586811  102835 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0103 19:20:14.586871  102835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
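
[annotation] Note that minikube does not delete the conflicting CNI configs: the find/mv commands above park them under a .mk_disabled suffix so they can be restored later. A sketch of that rename pass in Go, using the same glob patterns the find commands use:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfs renames matching files in /etc/cni/net.d to <name>.mk_disabled,
    // mirroring the `find ... -exec mv {} {}.mk_disabled` calls in the log.
    func disableCNIConfs(patterns ...string) error {
    	for _, pat := range patterns {
    		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
    		if err != nil {
    			return err
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already parked
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return err
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := disableCNIConfs("*bridge*", "*podman*"); err != nil {
    		panic(err)
    	}
    }
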
	I0103 19:20:14.586878  102835 start.go:475] detecting cgroup driver to use...
	I0103 19:20:14.586905  102835 detect.go:196] detected "cgroupfs" cgroup driver on host os
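
[annotation] detect.go reports "cgroupfs" on this host. Since the profile runs the docker driver, one way to reproduce the same answer locally (an illustrative check, not necessarily the one detect.go performs) is to ask the engine directly:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Ask the local Docker engine which cgroup driver it uses; on this host
    	// the log's detection step came back with "cgroupfs".
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
    }
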
	I0103 19:20:14.586952  102835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:20:14.600368  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:20:14.610298  102835 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:20:14.610345  102835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:20:14.621943  102835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:20:14.634378  102835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:20:14.706687  102835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:20:14.720062  102835 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0103 19:20:14.782635  102835 docker.go:219] disabling docker service ...
	I0103 19:20:14.782696  102835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:20:14.799469  102835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:20:14.809300  102835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:20:14.886485  102835 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0103 19:20:14.886558  102835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:20:14.963563  102835 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0103 19:20:14.963637  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:20:14.973776  102835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:20:14.987899  102835 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 19:20:14.987935  102835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:20:14.987977  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:20:14.996692  102835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:20:14.996743  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:20:15.005165  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:20:15.013480  102835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
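
[annotation] After the three sed passes above, the drop-in at /etc/crio/crio.conf.d/02-crio.conf should contain lines equivalent to the following, which matches the cgroup_manager and conmon_cgroup values echoed back by the `crio config` dump further down:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
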
	I0103 19:20:15.022401  102835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:20:15.030587  102835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:20:15.037126  102835 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0103 19:20:15.037783  102835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:20:15.045039  102835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:20:15.114995  102835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 19:20:15.190541  102835 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:20:15.190665  102835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:20:15.194046  102835 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 19:20:15.194070  102835 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 19:20:15.194080  102835 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I0103 19:20:15.194089  102835 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:20:15.194096  102835 command_runner.go:130] > Access: 2024-01-03 19:20:15.175901932 +0000
	I0103 19:20:15.194106  102835 command_runner.go:130] > Modify: 2024-01-03 19:20:15.175901932 +0000
	I0103 19:20:15.194118  102835 command_runner.go:130] > Change: 2024-01-03 19:20:15.175901932 +0000
	I0103 19:20:15.194154  102835 command_runner.go:130] >  Birth: -
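
[annotation] "Will wait 60s for socket path" amounts to polling until the CRI socket exists and is actually a socket (which the stat above confirms). A minimal sketch, assuming a plain stat loop rather than minikube's retry helpers:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists as a unix socket or the timeout lapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("crio socket is up")
    }
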
	I0103 19:20:15.194180  102835 start.go:543] Will wait 60s for crictl version
	I0103 19:20:15.194219  102835 ssh_runner.go:195] Run: which crictl
	I0103 19:20:15.197262  102835 command_runner.go:130] > /usr/bin/crictl
	I0103 19:20:15.197348  102835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:20:15.227812  102835 command_runner.go:130] > Version:  0.1.0
	I0103 19:20:15.227844  102835 command_runner.go:130] > RuntimeName:  cri-o
	I0103 19:20:15.227849  102835 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0103 19:20:15.227854  102835 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 19:20:15.227869  102835 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 19:20:15.227920  102835 ssh_runner.go:195] Run: crio --version
	I0103 19:20:15.260751  102835 command_runner.go:130] > crio version 1.24.6
	I0103 19:20:15.260777  102835 command_runner.go:130] > Version:          1.24.6
	I0103 19:20:15.260789  102835 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 19:20:15.260793  102835 command_runner.go:130] > GitTreeState:     clean
	I0103 19:20:15.260800  102835 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 19:20:15.260804  102835 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 19:20:15.260814  102835 command_runner.go:130] > Compiler:         gc
	I0103 19:20:15.260819  102835 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:20:15.260825  102835 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:20:15.260837  102835 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:20:15.260842  102835 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:20:15.260849  102835 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:20:15.260920  102835 ssh_runner.go:195] Run: crio --version
	I0103 19:20:15.292757  102835 command_runner.go:130] > crio version 1.24.6
	I0103 19:20:15.292779  102835 command_runner.go:130] > Version:          1.24.6
	I0103 19:20:15.292786  102835 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 19:20:15.292790  102835 command_runner.go:130] > GitTreeState:     clean
	I0103 19:20:15.292797  102835 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 19:20:15.292801  102835 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 19:20:15.292805  102835 command_runner.go:130] > Compiler:         gc
	I0103 19:20:15.292809  102835 command_runner.go:130] > Platform:         linux/amd64
	I0103 19:20:15.292814  102835 command_runner.go:130] > Linkmode:         dynamic
	I0103 19:20:15.292821  102835 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 19:20:15.292826  102835 command_runner.go:130] > SeccompEnabled:   true
	I0103 19:20:15.292830  102835 command_runner.go:130] > AppArmorEnabled:  false
	I0103 19:20:15.295860  102835 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 19:20:15.297382  102835 out.go:177]   - env NO_PROXY=192.168.58.2
	I0103 19:20:15.298766  102835 cli_runner.go:164] Run: docker network inspect multinode-867906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:20:15.313987  102835 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0103 19:20:15.317369  102835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:20:15.326926  102835 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906 for IP: 192.168.58.3
	I0103 19:20:15.326952  102835 certs.go:190] acquiring lock for shared ca certs: {Name:mk5aa238e4284ee43cf20f760a8d5a161bd1dece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:20:15.327076  102835 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key
	I0103 19:20:15.327112  102835 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key
	I0103 19:20:15.327123  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 19:20:15.327137  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 19:20:15.327149  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 19:20:15.327161  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 19:20:15.327209  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem (1338 bytes)
	W0103 19:20:15.327236  102835 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670_empty.pem, impossibly tiny 0 bytes
	I0103 19:20:15.327248  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem (1675 bytes)
	I0103 19:20:15.327280  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:20:15.327304  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:20:15.327326  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem (1679 bytes)
	I0103 19:20:15.327363  102835 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem (1708 bytes)
	I0103 19:20:15.327387  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem -> /usr/share/ca-certificates/15670.pem
	I0103 19:20:15.327412  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> /usr/share/ca-certificates/156702.pem
	I0103 19:20:15.327424  102835 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:20:15.327768  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:20:15.348539  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0103 19:20:15.369002  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:20:15.390255  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0103 19:20:15.411200  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/15670.pem --> /usr/share/ca-certificates/15670.pem (1338 bytes)
	I0103 19:20:15.431574  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /usr/share/ca-certificates/156702.pem (1708 bytes)
	I0103 19:20:15.452186  102835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:20:15.473656  102835 ssh_runner.go:195] Run: openssl version
	I0103 19:20:15.478475  102835 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0103 19:20:15.478552  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15670.pem && ln -fs /usr/share/ca-certificates/15670.pem /etc/ssl/certs/15670.pem"
	I0103 19:20:15.486956  102835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15670.pem
	I0103 19:20:15.489875  102835 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 19:05 /usr/share/ca-certificates/15670.pem
	I0103 19:20:15.489907  102835 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 19:05 /usr/share/ca-certificates/15670.pem
	I0103 19:20:15.489940  102835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15670.pem
	I0103 19:20:15.495922  102835 command_runner.go:130] > 51391683
	I0103 19:20:15.495985  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15670.pem /etc/ssl/certs/51391683.0"
	I0103 19:20:15.503881  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156702.pem && ln -fs /usr/share/ca-certificates/156702.pem /etc/ssl/certs/156702.pem"
	I0103 19:20:15.512039  102835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156702.pem
	I0103 19:20:15.514979  102835 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 19:05 /usr/share/ca-certificates/156702.pem
	I0103 19:20:15.515015  102835 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 19:05 /usr/share/ca-certificates/156702.pem
	I0103 19:20:15.515054  102835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156702.pem
	I0103 19:20:15.520744  102835 command_runner.go:130] > 3ec20f2e
	I0103 19:20:15.520971  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/156702.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 19:20:15.529018  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:20:15.536855  102835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:20:15.539893  102835 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:20:15.539927  102835 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 18:59 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:20:15.539962  102835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:20:15.546060  102835 command_runner.go:130] > b5213941
	I0103 19:20:15.546124  102835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
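
[annotation] The hash/symlink sequence above is OpenSSL's standard CA directory layout: each trusted certificate is reachable as <subject-hash>.0 under /etc/ssl/certs, so the three symlinks created here (51391683.0, 3ec20f2e.0, b5213941.0) are what let TLS clients on the node find and trust these CAs. Reproducing one step by hand, with the same commands the log runs:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
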
	I0103 19:20:15.554323  102835 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:20:15.557087  102835 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:20:15.557114  102835 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:20:15.557179  102835 ssh_runner.go:195] Run: crio config
	I0103 19:20:15.590699  102835 command_runner.go:130] ! time="2024-01-03 19:20:15.590366966Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0103 19:20:15.590724  102835 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0103 19:20:15.596259  102835 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 19:20:15.596277  102835 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 19:20:15.596284  102835 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 19:20:15.596287  102835 command_runner.go:130] > #
	I0103 19:20:15.596294  102835 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 19:20:15.596300  102835 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 19:20:15.596305  102835 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 19:20:15.596314  102835 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 19:20:15.596318  102835 command_runner.go:130] > # reload'.
	I0103 19:20:15.596326  102835 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 19:20:15.596332  102835 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 19:20:15.596338  102835 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 19:20:15.596345  102835 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 19:20:15.596351  102835 command_runner.go:130] > [crio]
	I0103 19:20:15.596357  102835 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 19:20:15.596362  102835 command_runner.go:130] > # containers images, in this directory.
	I0103 19:20:15.596372  102835 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0103 19:20:15.596381  102835 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 19:20:15.596387  102835 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0103 19:20:15.596398  102835 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 19:20:15.596406  102835 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 19:20:15.596411  102835 command_runner.go:130] > # storage_driver = "vfs"
	I0103 19:20:15.596419  102835 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 19:20:15.596425  102835 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 19:20:15.596429  102835 command_runner.go:130] > # storage_option = [
	I0103 19:20:15.596432  102835 command_runner.go:130] > # ]
	I0103 19:20:15.596438  102835 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 19:20:15.596444  102835 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 19:20:15.596449  102835 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 19:20:15.596457  102835 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 19:20:15.596463  102835 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 19:20:15.596470  102835 command_runner.go:130] > # always happen on a node reboot
	I0103 19:20:15.596475  102835 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 19:20:15.596482  102835 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 19:20:15.596489  102835 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 19:20:15.596500  102835 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 19:20:15.596505  102835 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 19:20:15.596513  102835 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 19:20:15.596523  102835 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 19:20:15.596528  102835 command_runner.go:130] > # internal_wipe = true
	I0103 19:20:15.596536  102835 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 19:20:15.596543  102835 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 19:20:15.596550  102835 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 19:20:15.596556  102835 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 19:20:15.596563  102835 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 19:20:15.596567  102835 command_runner.go:130] > [crio.api]
	I0103 19:20:15.596574  102835 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 19:20:15.596581  102835 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 19:20:15.596586  102835 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 19:20:15.596592  102835 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 19:20:15.596598  102835 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 19:20:15.596606  102835 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 19:20:15.596610  102835 command_runner.go:130] > # stream_port = "0"
	I0103 19:20:15.596615  102835 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 19:20:15.596619  102835 command_runner.go:130] > # stream_enable_tls = false
	I0103 19:20:15.596626  102835 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 19:20:15.596633  102835 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 19:20:15.596639  102835 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 19:20:15.596647  102835 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 19:20:15.596651  102835 command_runner.go:130] > # minutes.
	I0103 19:20:15.596655  102835 command_runner.go:130] > # stream_tls_cert = ""
	I0103 19:20:15.596663  102835 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 19:20:15.596671  102835 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 19:20:15.596678  102835 command_runner.go:130] > # stream_tls_key = ""
	I0103 19:20:15.596684  102835 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 19:20:15.596692  102835 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 19:20:15.596697  102835 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 19:20:15.596703  102835 command_runner.go:130] > # stream_tls_ca = ""
	I0103 19:20:15.596711  102835 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:20:15.596715  102835 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0103 19:20:15.596724  102835 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 19:20:15.596729  102835 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0103 19:20:15.596748  102835 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 19:20:15.596756  102835 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 19:20:15.596761  102835 command_runner.go:130] > [crio.runtime]
	I0103 19:20:15.596767  102835 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 19:20:15.596776  102835 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 19:20:15.596780  102835 command_runner.go:130] > # "nofile=1024:2048"
	I0103 19:20:15.596787  102835 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 19:20:15.596793  102835 command_runner.go:130] > # default_ulimits = [
	I0103 19:20:15.596797  102835 command_runner.go:130] > # ]
	I0103 19:20:15.596805  102835 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 19:20:15.596809  102835 command_runner.go:130] > # no_pivot = false
	I0103 19:20:15.596815  102835 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 19:20:15.596823  102835 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 19:20:15.596828  102835 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 19:20:15.596836  102835 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 19:20:15.596842  102835 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 19:20:15.596850  102835 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:20:15.596854  102835 command_runner.go:130] > # conmon = ""
	I0103 19:20:15.596858  102835 command_runner.go:130] > # Cgroup setting for conmon
	I0103 19:20:15.596868  102835 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 19:20:15.596873  102835 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 19:20:15.596880  102835 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 19:20:15.596887  102835 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 19:20:15.596894  102835 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 19:20:15.596900  102835 command_runner.go:130] > # conmon_env = [
	I0103 19:20:15.596904  102835 command_runner.go:130] > # ]
	I0103 19:20:15.596909  102835 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 19:20:15.596916  102835 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 19:20:15.596922  102835 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 19:20:15.596928  102835 command_runner.go:130] > # default_env = [
	I0103 19:20:15.596931  102835 command_runner.go:130] > # ]
	I0103 19:20:15.596937  102835 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 19:20:15.596943  102835 command_runner.go:130] > # selinux = false
	I0103 19:20:15.596950  102835 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 19:20:15.596958  102835 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 19:20:15.596963  102835 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 19:20:15.596969  102835 command_runner.go:130] > # seccomp_profile = ""
	I0103 19:20:15.596989  102835 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 19:20:15.596999  102835 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 19:20:15.597005  102835 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 19:20:15.597009  102835 command_runner.go:130] > # which might increase security.
	I0103 19:20:15.597014  102835 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0103 19:20:15.597020  102835 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 19:20:15.597029  102835 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 19:20:15.597035  102835 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 19:20:15.597043  102835 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 19:20:15.597048  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:20:15.597055  102835 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 19:20:15.597061  102835 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 19:20:15.597068  102835 command_runner.go:130] > # the cgroup blockio controller.
	I0103 19:20:15.597072  102835 command_runner.go:130] > # blockio_config_file = ""
	I0103 19:20:15.597081  102835 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 19:20:15.597087  102835 command_runner.go:130] > # irqbalance daemon.
	I0103 19:20:15.597092  102835 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 19:20:15.597101  102835 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 19:20:15.597120  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:20:15.597127  102835 command_runner.go:130] > # rdt_config_file = ""
	I0103 19:20:15.597132  102835 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 19:20:15.597139  102835 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 19:20:15.597145  102835 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 19:20:15.597152  102835 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 19:20:15.597158  102835 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 19:20:15.597166  102835 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 19:20:15.597170  102835 command_runner.go:130] > # will be added.
	I0103 19:20:15.597175  102835 command_runner.go:130] > # default_capabilities = [
	I0103 19:20:15.597179  102835 command_runner.go:130] > # 	"CHOWN",
	I0103 19:20:15.597183  102835 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 19:20:15.597187  102835 command_runner.go:130] > # 	"FSETID",
	I0103 19:20:15.597193  102835 command_runner.go:130] > # 	"FOWNER",
	I0103 19:20:15.597197  102835 command_runner.go:130] > # 	"SETGID",
	I0103 19:20:15.597200  102835 command_runner.go:130] > # 	"SETUID",
	I0103 19:20:15.597206  102835 command_runner.go:130] > # 	"SETPCAP",
	I0103 19:20:15.597211  102835 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 19:20:15.597217  102835 command_runner.go:130] > # 	"KILL",
	I0103 19:20:15.597220  102835 command_runner.go:130] > # ]
	I0103 19:20:15.597228  102835 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0103 19:20:15.597237  102835 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0103 19:20:15.597242  102835 command_runner.go:130] > # add_inheritable_capabilities = true
	I0103 19:20:15.597250  102835 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 19:20:15.597256  102835 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:20:15.597263  102835 command_runner.go:130] > # default_sysctls = [
	I0103 19:20:15.597266  102835 command_runner.go:130] > # ]
	I0103 19:20:15.597273  102835 command_runner.go:130] > # List of devices on the host that a
	I0103 19:20:15.597279  102835 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 19:20:15.597285  102835 command_runner.go:130] > # allowed_devices = [
	I0103 19:20:15.597289  102835 command_runner.go:130] > # 	"/dev/fuse",
	I0103 19:20:15.597293  102835 command_runner.go:130] > # ]
	I0103 19:20:15.597298  102835 command_runner.go:130] > # List of additional devices. specified as
	I0103 19:20:15.597339  102835 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 19:20:15.597350  102835 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 19:20:15.597356  102835 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 19:20:15.597361  102835 command_runner.go:130] > # additional_devices = [
	I0103 19:20:15.597367  102835 command_runner.go:130] > # ]
	I0103 19:20:15.597372  102835 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 19:20:15.597378  102835 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 19:20:15.597382  102835 command_runner.go:130] > # 	"/etc/cdi",
	I0103 19:20:15.597386  102835 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 19:20:15.597390  102835 command_runner.go:130] > # ]
	I0103 19:20:15.597396  102835 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 19:20:15.597404  102835 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 19:20:15.597409  102835 command_runner.go:130] > # Defaults to false.
	I0103 19:20:15.597414  102835 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 19:20:15.597420  102835 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 19:20:15.597429  102835 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 19:20:15.597433  102835 command_runner.go:130] > # hooks_dir = [
	I0103 19:20:15.597437  102835 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 19:20:15.597443  102835 command_runner.go:130] > # ]
	I0103 19:20:15.597449  102835 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 19:20:15.597457  102835 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 19:20:15.597462  102835 command_runner.go:130] > # its default mounts from the following two files:
	I0103 19:20:15.597468  102835 command_runner.go:130] > #
	I0103 19:20:15.597474  102835 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 19:20:15.597483  102835 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 19:20:15.597488  102835 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 19:20:15.597494  102835 command_runner.go:130] > #
	I0103 19:20:15.597500  102835 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 19:20:15.597508  102835 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 19:20:15.597515  102835 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 19:20:15.597522  102835 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 19:20:15.597525  102835 command_runner.go:130] > #
	I0103 19:20:15.597532  102835 command_runner.go:130] > # default_mounts_file = ""
	I0103 19:20:15.597537  102835 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 19:20:15.597545  102835 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 19:20:15.597549  102835 command_runner.go:130] > # pids_limit = 0
	I0103 19:20:15.597556  102835 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 19:20:15.597564  102835 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 19:20:15.597572  102835 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 19:20:15.597582  102835 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 19:20:15.597586  102835 command_runner.go:130] > # log_size_max = -1
	I0103 19:20:15.597595  102835 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 19:20:15.597604  102835 command_runner.go:130] > # log_to_journald = false
	I0103 19:20:15.597610  102835 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 19:20:15.597617  102835 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 19:20:15.597622  102835 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 19:20:15.597629  102835 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 19:20:15.597635  102835 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 19:20:15.597642  102835 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 19:20:15.597647  102835 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 19:20:15.597653  102835 command_runner.go:130] > # read_only = false
	I0103 19:20:15.597659  102835 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 19:20:15.597667  102835 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 19:20:15.597672  102835 command_runner.go:130] > # live configuration reload.
	I0103 19:20:15.597678  102835 command_runner.go:130] > # log_level = "info"
	I0103 19:20:15.597684  102835 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 19:20:15.597697  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:20:15.597703  102835 command_runner.go:130] > # log_filter = ""
	I0103 19:20:15.597709  102835 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 19:20:15.597717  102835 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 19:20:15.597722  102835 command_runner.go:130] > # separated by comma.
	I0103 19:20:15.597728  102835 command_runner.go:130] > # uid_mappings = ""
	I0103 19:20:15.597734  102835 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 19:20:15.597743  102835 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 19:20:15.597747  102835 command_runner.go:130] > # separated by comma.
	I0103 19:20:15.597751  102835 command_runner.go:130] > # gid_mappings = ""
	I0103 19:20:15.597759  102835 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 19:20:15.597765  102835 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:20:15.597774  102835 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:20:15.597778  102835 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 19:20:15.597786  102835 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 19:20:15.597794  102835 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 19:20:15.597802  102835 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 19:20:15.597806  102835 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 19:20:15.597814  102835 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 19:20:15.597821  102835 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 19:20:15.597829  102835 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 19:20:15.597835  102835 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 19:20:15.597843  102835 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 19:20:15.597851  102835 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 19:20:15.597857  102835 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 19:20:15.597862  102835 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 19:20:15.597869  102835 command_runner.go:130] > # drop_infra_ctr = true
	I0103 19:20:15.597875  102835 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 19:20:15.597882  102835 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 19:20:15.597890  102835 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 19:20:15.597897  102835 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 19:20:15.597902  102835 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 19:20:15.597909  102835 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 19:20:15.597914  102835 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 19:20:15.597923  102835 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 19:20:15.597927  102835 command_runner.go:130] > # pinns_path = ""
	I0103 19:20:15.597933  102835 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 19:20:15.597942  102835 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 19:20:15.597948  102835 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 19:20:15.597955  102835 command_runner.go:130] > # default_runtime = "runc"
	I0103 19:20:15.597960  102835 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 19:20:15.597969  102835 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0103 19:20:15.597981  102835 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 19:20:15.597989  102835 command_runner.go:130] > # creation as a file is not desired either.
	I0103 19:20:15.597997  102835 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 19:20:15.598004  102835 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 19:20:15.598009  102835 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 19:20:15.598014  102835 command_runner.go:130] > # ]
	I0103 19:20:15.598020  102835 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 19:20:15.598028  102835 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 19:20:15.598034  102835 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 19:20:15.598043  102835 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 19:20:15.598046  102835 command_runner.go:130] > #
	I0103 19:20:15.598054  102835 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 19:20:15.598059  102835 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 19:20:15.598065  102835 command_runner.go:130] > #  runtime_type = "oci"
	I0103 19:20:15.598070  102835 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 19:20:15.598077  102835 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 19:20:15.598083  102835 command_runner.go:130] > #  allowed_annotations = []
	I0103 19:20:15.598089  102835 command_runner.go:130] > # Where:
	I0103 19:20:15.598094  102835 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 19:20:15.598103  102835 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 19:20:15.598111  102835 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 19:20:15.598117  102835 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 19:20:15.598123  102835 command_runner.go:130] > #   in $PATH.
	I0103 19:20:15.598129  102835 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 19:20:15.598153  102835 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 19:20:15.598167  102835 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 19:20:15.598173  102835 command_runner.go:130] > #   state.
	I0103 19:20:15.598182  102835 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 19:20:15.598194  102835 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0103 19:20:15.598203  102835 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 19:20:15.598209  102835 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 19:20:15.598217  102835 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 19:20:15.598227  102835 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 19:20:15.598232  102835 command_runner.go:130] > #   The currently recognized values are:
	I0103 19:20:15.598241  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 19:20:15.598248  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 19:20:15.598256  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 19:20:15.598262  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 19:20:15.598271  102835 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 19:20:15.598280  102835 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 19:20:15.598288  102835 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 19:20:15.598295  102835 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 19:20:15.598302  102835 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 19:20:15.598307  102835 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 19:20:15.598313  102835 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0103 19:20:15.598319  102835 command_runner.go:130] > runtime_type = "oci"
	I0103 19:20:15.598323  102835 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 19:20:15.598330  102835 command_runner.go:130] > runtime_config_path = ""
	I0103 19:20:15.598334  102835 command_runner.go:130] > monitor_path = ""
	I0103 19:20:15.598341  102835 command_runner.go:130] > monitor_cgroup = ""
	I0103 19:20:15.598345  102835 command_runner.go:130] > monitor_exec_cgroup = ""
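The [crio.runtime.runtimes.runc] block above is a concrete instance of the per-handler table the comments describe. As a minimal sketch of that structure (not CRI-O's own code; it assumes the third-party github.com/BurntSushi/toml parser, and the struct names are illustrative), the entry decodes into Go like this:

package main

import (
	"fmt"

	"github.com/BurntSushi/toml" // assumed third-party TOML parser, not part of CRI-O
)

// Runtime mirrors the per-handler fields described in the comments above.
type Runtime struct {
	RuntimePath string `toml:"runtime_path"`
	RuntimeType string `toml:"runtime_type"`
	RuntimeRoot string `toml:"runtime_root"`
}

// Config covers just the [crio.runtime.runtimes] subtree of the dump above.
type Config struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]Runtime `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	src := `
[crio.runtime.runtimes.runc]
runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
runtime_type = "oci"
runtime_root = "/run/runc"
`
	var cfg Config
	if _, err := toml.Decode(src, &cfg); err != nil {
		panic(err)
	}
	// Prints the decoded runc handler, matching the config lines above.
	fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["runc"])
}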
	I0103 19:20:15.598370  102835 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 19:20:15.598377  102835 command_runner.go:130] > # running containers
	I0103 19:20:15.598381  102835 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 19:20:15.598389  102835 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 19:20:15.598398  102835 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 19:20:15.598404  102835 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0103 19:20:15.598410  102835 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 19:20:15.598415  102835 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 19:20:15.598422  102835 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 19:20:15.598426  102835 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 19:20:15.598432  102835 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 19:20:15.598438  102835 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 19:20:15.598444  102835 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 19:20:15.598450  102835 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 19:20:15.598458  102835 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 19:20:15.598466  102835 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0103 19:20:15.598475  102835 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0103 19:20:15.598482  102835 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 19:20:15.598492  102835 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 19:20:15.598502  102835 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 19:20:15.598507  102835 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I0103 19:20:15.598517  102835 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 19:20:15.598523  102835 command_runner.go:130] > # Example:
	I0103 19:20:15.598528  102835 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 19:20:15.598533  102835 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 19:20:15.598538  102835 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 19:20:15.598545  102835 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 19:20:15.598549  102835 command_runner.go:130] > # cpuset = "0-1"
	I0103 19:20:15.598554  102835 command_runner.go:130] > # cpushares = 0
	I0103 19:20:15.598560  102835 command_runner.go:130] > # Where:
	I0103 19:20:15.598564  102835 command_runner.go:130] > # The workload name is workload-type.
	I0103 19:20:15.598574  102835 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 19:20:15.598579  102835 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 19:20:15.598587  102835 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 19:20:15.598595  102835 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 19:20:15.598603  102835 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
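From the pod side, opting into the example workload above comes down to two annotations. A hedged client-go sketch following the annotation forms given in the comments (the pod and container names here are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "demo",
			Annotations: map[string]string{
				// Activation annotation: key only, the value is ignored.
				"io.crio/workload": "true",
				// Per-container override, following the example form above.
				"io.crio.workload-type/app": `{"cpushares": "512"}`,
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	fmt.Println(pod.Annotations)
}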
	I0103 19:20:15.598609  102835 command_runner.go:130] > # 
	I0103 19:20:15.598615  102835 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 19:20:15.598621  102835 command_runner.go:130] > #
	I0103 19:20:15.598627  102835 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 19:20:15.598634  102835 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 19:20:15.598642  102835 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 19:20:15.598649  102835 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 19:20:15.598656  102835 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 19:20:15.598660  102835 command_runner.go:130] > [crio.image]
	I0103 19:20:15.598669  102835 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 19:20:15.598674  102835 command_runner.go:130] > # default_transport = "docker://"
	I0103 19:20:15.598683  102835 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 19:20:15.598689  102835 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:20:15.598695  102835 command_runner.go:130] > # global_auth_file = ""
	I0103 19:20:15.598700  102835 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 19:20:15.598707  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:20:15.598712  102835 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 19:20:15.598721  102835 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 19:20:15.598727  102835 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 19:20:15.598735  102835 command_runner.go:130] > # This option supports live configuration reload.
	I0103 19:20:15.598739  102835 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 19:20:15.598747  102835 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 19:20:15.598753  102835 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0103 19:20:15.598762  102835 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0103 19:20:15.598768  102835 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 19:20:15.598774  102835 command_runner.go:130] > # pause_command = "/pause"
	I0103 19:20:15.598780  102835 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 19:20:15.598788  102835 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 19:20:15.598795  102835 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 19:20:15.598803  102835 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 19:20:15.598808  102835 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 19:20:15.598814  102835 command_runner.go:130] > # signature_policy = ""
	I0103 19:20:15.598823  102835 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 19:20:15.598832  102835 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 19:20:15.598837  102835 command_runner.go:130] > # changing them here.
	I0103 19:20:15.598844  102835 command_runner.go:130] > # insecure_registries = [
	I0103 19:20:15.598847  102835 command_runner.go:130] > # ]
	I0103 19:20:15.598856  102835 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind, and
	I0103 19:20:15.598861  102835 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0103 19:20:15.598873  102835 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 19:20:15.598879  102835 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 19:20:15.598885  102835 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 19:20:15.598891  102835 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 19:20:15.598897  102835 command_runner.go:130] > # CNI plugins.
	I0103 19:20:15.598901  102835 command_runner.go:130] > [crio.network]
	I0103 19:20:15.598907  102835 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 19:20:15.598915  102835 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0103 19:20:15.598919  102835 command_runner.go:130] > # cni_default_network = ""
	I0103 19:20:15.598925  102835 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 19:20:15.598933  102835 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 19:20:15.598939  102835 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 19:20:15.598945  102835 command_runner.go:130] > # plugin_dirs = [
	I0103 19:20:15.598949  102835 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 19:20:15.598954  102835 command_runner.go:130] > # ]
	I0103 19:20:15.598960  102835 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0103 19:20:15.598966  102835 command_runner.go:130] > [crio.metrics]
	I0103 19:20:15.598971  102835 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 19:20:15.598981  102835 command_runner.go:130] > # enable_metrics = false
	I0103 19:20:15.598987  102835 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 19:20:15.598992  102835 command_runner.go:130] > # Per default all metrics are enabled.
	I0103 19:20:15.598999  102835 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0103 19:20:15.599007  102835 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 19:20:15.599013  102835 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 19:20:15.599020  102835 command_runner.go:130] > # metrics_collectors = [
	I0103 19:20:15.599024  102835 command_runner.go:130] > # 	"operations",
	I0103 19:20:15.599031  102835 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 19:20:15.599036  102835 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 19:20:15.599040  102835 command_runner.go:130] > # 	"operations_errors",
	I0103 19:20:15.599046  102835 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 19:20:15.599051  102835 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 19:20:15.599059  102835 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 19:20:15.599063  102835 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 19:20:15.599070  102835 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 19:20:15.599075  102835 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 19:20:15.599081  102835 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 19:20:15.599085  102835 command_runner.go:130] > # 	"containers_oom_total",
	I0103 19:20:15.599091  102835 command_runner.go:130] > # 	"containers_oom",
	I0103 19:20:15.599096  102835 command_runner.go:130] > # 	"processes_defunct",
	I0103 19:20:15.599102  102835 command_runner.go:130] > # 	"operations_total",
	I0103 19:20:15.599107  102835 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 19:20:15.599113  102835 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 19:20:15.599118  102835 command_runner.go:130] > # 	"operations_errors_total",
	I0103 19:20:15.599125  102835 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 19:20:15.599129  102835 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 19:20:15.599133  102835 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 19:20:15.599139  102835 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 19:20:15.599144  102835 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 19:20:15.599151  102835 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 19:20:15.599154  102835 command_runner.go:130] > # ]
	I0103 19:20:15.599162  102835 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 19:20:15.599166  102835 command_runner.go:130] > # metrics_port = 9090
	I0103 19:20:15.599173  102835 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 19:20:15.599177  102835 command_runner.go:130] > # metrics_socket = ""
	I0103 19:20:15.599183  102835 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 19:20:15.599189  102835 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 19:20:15.599197  102835 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 19:20:15.599202  102835 command_runner.go:130] > # certificate on any modification event.
	I0103 19:20:15.599209  102835 command_runner.go:130] > # metrics_cert = ""
	I0103 19:20:15.599214  102835 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 19:20:15.599221  102835 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 19:20:15.599225  102835 command_runner.go:130] > # metrics_key = ""
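With enable_metrics switched on, the endpoint on the default metrics_port serves the Prometheus text format and can be scraped with a plain HTTP GET. A minimal sketch, assuming the server listens locally on the default port and without TLS:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Scrape the CRI-O metrics endpoint (default metrics_port = 9090).
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Prometheus text format; per the comments above, collectors appear
	// both bare and prefixed, e.g. "operations" as crio_operations.
	fmt.Printf("%s", body)
}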
	I0103 19:20:15.599231  102835 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 19:20:15.599236  102835 command_runner.go:130] > [crio.tracing]
	I0103 19:20:15.599242  102835 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 19:20:15.599248  102835 command_runner.go:130] > # enable_tracing = false
	I0103 19:20:15.599253  102835 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0103 19:20:15.599261  102835 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 19:20:15.599266  102835 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 19:20:15.599274  102835 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 19:20:15.599280  102835 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 19:20:15.599285  102835 command_runner.go:130] > [crio.stats]
	I0103 19:20:15.599291  102835 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 19:20:15.599299  102835 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 19:20:15.599305  102835 command_runner.go:130] > # stats_collection_period = 0
	I0103 19:20:15.599374  102835 cni.go:84] Creating CNI manager for ""
	I0103 19:20:15.599384  102835 cni.go:136] 2 nodes found, recommending kindnet
	I0103 19:20:15.599393  102835 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:20:15.599411  102835 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-867906 NodeName:multinode-867906-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:20:15.599527  102835 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-867906-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
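The kubeadm config block above is rendered from the option struct logged at kubeadm.go:176. A minimal text/template sketch of the same idea (the template and struct here are illustrative, not minikube's actual ones; the values are taken from the log):

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for minikube's kubeadm template, covering only the
// InitConfiguration head of the config shown above.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`))

func main() {
	// Field values lifted from the kubeadm options logged above.
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
		NodeIP           string
	}{"192.168.58.3", 8443, "unix:///var/run/crio/crio.sock", "multinode-867906-m02", "192.168.58.3"}
	if err := kubeadmTmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}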
	
	I0103 19:20:15.599579  102835 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-867906-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-867906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 19:20:15.599651  102835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:20:15.607531  102835 command_runner.go:130] > kubeadm
	I0103 19:20:15.607550  102835 command_runner.go:130] > kubectl
	I0103 19:20:15.607561  102835 command_runner.go:130] > kubelet
	I0103 19:20:15.607587  102835 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:20:15.607637  102835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0103 19:20:15.615152  102835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0103 19:20:15.630225  102835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 19:20:15.645353  102835 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0103 19:20:15.648338  102835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:20:15.657962  102835 host.go:66] Checking if "multinode-867906" exists ...
	I0103 19:20:15.658205  102835 config.go:182] Loaded profile config "multinode-867906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:20:15.658244  102835 start.go:304] JoinCluster: &{Name:multinode-867906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-867906 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:20:15.658343  102835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0103 19:20:15.658386  102835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:20:15.674994  102835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:20:15.810486  102835 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ftz7ad.peqnrkxfemamf665 --discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 
	I0103 19:20:15.814379  102835 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:20:15.814437  102835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ftz7ad.peqnrkxfemamf665 --discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-867906-m02"
	I0103 19:20:15.847411  102835 command_runner.go:130] ! W0103 19:20:15.846953    1108 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0103 19:20:15.874893  102835 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I0103 19:20:15.942022  102835 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:20:18.076008  102835 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 19:20:18.076035  102835 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0103 19:20:18.076042  102835 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I0103 19:20:18.076048  102835 command_runner.go:130] > OS: Linux
	I0103 19:20:18.076056  102835 command_runner.go:130] > CGROUPS_CPU: enabled
	I0103 19:20:18.076072  102835 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0103 19:20:18.076081  102835 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0103 19:20:18.076089  102835 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0103 19:20:18.076095  102835 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0103 19:20:18.076100  102835 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0103 19:20:18.076107  102835 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0103 19:20:18.076113  102835 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0103 19:20:18.076118  102835 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0103 19:20:18.076124  102835 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0103 19:20:18.076140  102835 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0103 19:20:18.076150  102835 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:20:18.076159  102835 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:20:18.076166  102835 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 19:20:18.076175  102835 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0103 19:20:18.076182  102835 command_runner.go:130] > This node has joined the cluster:
	I0103 19:20:18.076190  102835 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0103 19:20:18.076198  102835 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0103 19:20:18.076208  102835 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0103 19:20:18.076227  102835 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ftz7ad.peqnrkxfemamf665 --discovery-token-ca-cert-hash sha256:6fada62843f74bbefe3d0de7ede254ffc49257c2300ab57b02a33590c9b388a1 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-867906-m02": (2.261777313s)
	I0103 19:20:18.076246  102835 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0103 19:20:18.235080  102835 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0103 19:20:18.235176  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-867906 minikube.k8s.io/updated_at=2024_01_03T19_20_18_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:20:18.305380  102835 command_runner.go:130] > node/multinode-867906-m02 labeled
	I0103 19:20:18.307797  102835 start.go:306] JoinCluster complete in 2.649549779s
	I0103 19:20:18.307824  102835 cni.go:84] Creating CNI manager for ""
	I0103 19:20:18.307831  102835 cni.go:136] 2 nodes found, recommending kindnet
	I0103 19:20:18.307882  102835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:20:18.311351  102835 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 19:20:18.311375  102835 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0103 19:20:18.311385  102835 command_runner.go:130] > Device: 34h/52d	Inode: 582508      Links: 1
	I0103 19:20:18.311394  102835 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 19:20:18.311403  102835 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0103 19:20:18.311412  102835 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0103 19:20:18.311428  102835 command_runner.go:130] > Change: 2024-01-03 18:59:22.202270685 +0000
	I0103 19:20:18.311440  102835 command_runner.go:130] >  Birth: 2024-01-03 18:59:22.178268890 +0000
	I0103 19:20:18.311508  102835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:20:18.311519  102835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:20:18.328327  102835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 19:20:18.547603  102835 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:20:18.547629  102835 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0103 19:20:18.547639  102835 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0103 19:20:18.547646  102835 command_runner.go:130] > daemonset.apps/kindnet configured
	I0103 19:20:18.547984  102835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:20:18.548213  102835 kapi.go:59] client config for multinode-867906: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:20:18.548522  102835 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 19:20:18.548589  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:18.548616  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:18.548630  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:18.550722  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:18.550740  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:18.550747  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:18 GMT
	I0103 19:20:18.550752  102835 round_trippers.go:580]     Audit-Id: a4adcdd0-48fc-4335-9ae7-604d9a7e2cb0
	I0103 19:20:18.550757  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:18.550762  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:18.550767  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:18.550773  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:18.550778  102835 round_trippers.go:580]     Content-Length: 291
	I0103 19:20:18.550801  102835 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2d06393-ba6f-4103-beba-76fece3a20fb","resourceVersion":"409","creationTimestamp":"2024-01-03T19:19:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0103 19:20:18.550877  102835 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-867906" context rescaled to 1 replicas
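The kapi.go rescale above goes through the deployment's scale subresource. A client-go sketch of the equivalent call (kubeconfig path as logged; error handling kept minimal):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17885-8915/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Fetch the scale subresource of the coredns deployment...
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ...and set it to one replica, as the log reports.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}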
	I0103 19:20:18.550902  102835 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 19:20:18.554665  102835 out.go:177] * Verifying Kubernetes components...
	I0103 19:20:18.556289  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:20:18.567719  102835 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:20:18.568083  102835 kapi.go:59] client config for multinode-867906: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/multinode-867906/client.key", CAFile:"/home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 19:20:18.568378  102835 node_ready.go:35] waiting up to 6m0s for node "multinode-867906-m02" to be "Ready" ...
	I0103 19:20:18.568463  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:18.568473  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:18.568485  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:18.568498  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:18.570635  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:18.570655  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:18.570664  102835 round_trippers.go:580]     Audit-Id: 4901ec7b-e7ae-4d82-8c3f-e18cf471c2eb
	I0103 19:20:18.570673  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:18.570682  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:18.570690  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:18.570699  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:18.570707  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:18 GMT
	I0103 19:20:18.570843  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"449","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I0103 19:20:19.068543  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:19.068567  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:19.068575  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:19.068581  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:19.070971  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:19.070997  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:19.071007  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:19.071016  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:19.071034  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:19.071043  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:19 GMT
	I0103 19:20:19.071053  102835 round_trippers.go:580]     Audit-Id: 4f3ad86e-01cc-4f42-ae90-6e46c5909fe4
	I0103 19:20:19.071067  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:19.071156  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:19.568873  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:19.568895  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:19.568904  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:19.568909  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:19.571497  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:19.571516  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:19.571524  102835 round_trippers.go:580]     Audit-Id: 43f6dd45-464f-4c5f-97fa-e5e4b17d4bd0
	I0103 19:20:19.571530  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:19.571535  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:19.571540  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:19.571545  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:19.571550  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:19 GMT
	I0103 19:20:19.571661  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:20.069333  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:20.069358  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:20.069366  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:20.069371  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:20.071800  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:20.071828  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:20.071843  102835 round_trippers.go:580]     Audit-Id: 5f541212-553d-4003-9cca-f77035fe486b
	I0103 19:20:20.071852  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:20.071861  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:20.071869  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:20.071883  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:20.071893  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:20 GMT
	I0103 19:20:20.072021  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:20.568536  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:20.568560  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:20.568568  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:20.568573  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:20.570889  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:20.570913  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:20.570923  102835 round_trippers.go:580]     Audit-Id: bf9e6340-0301-4e90-b904-d101902c28f4
	I0103 19:20:20.570932  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:20.570941  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:20.570951  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:20.570959  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:20.570968  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:20 GMT
	I0103 19:20:20.571100  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:20.571451  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
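The node_ready poll above repeats until the node's Ready condition turns True or the 6m0s budget expires. A client-go sketch of that loop (kubeconfig path and node name from the log; the 500ms interval matches the request timestamps above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17885-8915/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, giving up after 6m0s, matching the wait in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-867906-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic(fmt.Sprintf("node never became Ready: %v", ctx.Err()))
		case <-time.After(500 * time.Millisecond):
		}
	}
}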
	I0103 19:20:21.068721  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:21.068745  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:21.068757  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:21.068763  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:21.071248  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:21.071266  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:21.071273  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:21 GMT
	I0103 19:20:21.071279  102835 round_trippers.go:580]     Audit-Id: b25427cf-0bf3-4266-b259-7e18461e4bd1
	I0103 19:20:21.071284  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:21.071289  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:21.071295  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:21.071300  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:21.071442  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:21.569045  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:21.569069  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:21.569076  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:21.569088  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:21.571410  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:21.571435  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:21.571444  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:21 GMT
	I0103 19:20:21.571451  102835 round_trippers.go:580]     Audit-Id: 08d91dc7-b4d9-407c-9653-c350eb464147
	I0103 19:20:21.571459  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:21.571466  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:21.571478  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:21.571488  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:21.571607  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:22.069288  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:22.069317  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:22.069325  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:22.069331  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:22.071632  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:22.071651  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:22.071657  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:22.071663  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:22 GMT
	I0103 19:20:22.071672  102835 round_trippers.go:580]     Audit-Id: 94eb874d-b8e1-44da-82b2-3171e7829603
	I0103 19:20:22.071681  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:22.071689  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:22.071697  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:22.071847  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:22.569451  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:22.569474  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:22.569482  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:22.569488  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:22.571684  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:22.571704  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:22.571711  102835 round_trippers.go:580]     Audit-Id: 695df0fa-6b71-43ae-8e9b-6cdd6301b3e3
	I0103 19:20:22.571717  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:22.571723  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:22.571731  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:22.571739  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:22.571746  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:22 GMT
	I0103 19:20:22.571855  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:22.572176  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:23.069516  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:23.069540  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:23.069553  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:23.069564  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:23.071888  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:23.071911  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:23.071918  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:23.071923  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:23.071929  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:23.071933  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:23.071939  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:23 GMT
	I0103 19:20:23.071943  102835 round_trippers.go:580]     Audit-Id: 01c6418c-6e89-492e-877d-cbac5bd3340e
	I0103 19:20:23.072031  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:23.568548  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:23.568572  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:23.568581  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:23.568587  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:23.571226  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:23.571245  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:23.571254  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:23.571260  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:23.571267  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:23.571277  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:23.571285  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:23 GMT
	I0103 19:20:23.571298  102835 round_trippers.go:580]     Audit-Id: 0c4cbc46-fd3f-476b-b887-7708c42d5972
	I0103 19:20:23.571439  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:24.069461  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:24.069486  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:24.069494  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:24.069500  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:24.071715  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:24.071736  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:24.071744  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:24 GMT
	I0103 19:20:24.071750  102835 round_trippers.go:580]     Audit-Id: 2930bb8d-0417-461a-84ee-b81beef7aa70
	I0103 19:20:24.071755  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:24.071761  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:24.071766  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:24.071774  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:24.071857  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:24.568519  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:24.568557  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:24.568566  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:24.568571  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:24.570938  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:24.570962  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:24.570970  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:24.570976  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:24.570981  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:24.570987  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:24.570996  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:24 GMT
	I0103 19:20:24.571005  102835 round_trippers.go:580]     Audit-Id: bc533a97-f664-41e9-8b41-1f9086162f3a
	I0103 19:20:24.571156  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:25.068754  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:25.068790  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:25.068801  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:25.068809  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:25.071268  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:25.071288  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:25.071295  102835 round_trippers.go:580]     Audit-Id: 4d5922c5-1604-4324-9e09-bace9d175c58
	I0103 19:20:25.071300  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:25.071305  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:25.071310  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:25.071315  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:25.071320  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:25 GMT
	I0103 19:20:25.071477  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:25.071854  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:25.569171  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:25.569194  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:25.569202  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:25.569208  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:25.571450  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:25.571475  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:25.571485  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:25.571493  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:25.571503  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:25.571511  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:25.571523  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:25 GMT
	I0103 19:20:25.571532  102835 round_trippers.go:580]     Audit-Id: 0d58c139-aade-44a9-b7bf-9b43497e451b
	I0103 19:20:25.571671  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:26.069368  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:26.069400  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:26.069410  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:26.069419  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:26.071857  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:26.071883  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:26.071894  102835 round_trippers.go:580]     Audit-Id: 20aac575-56b8-4183-b041-aa5e78f7760e
	I0103 19:20:26.071904  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:26.071912  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:26.071918  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:26.071924  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:26.071929  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:26 GMT
	I0103 19:20:26.072020  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:26.569324  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:26.569347  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:26.569354  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:26.569361  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:26.571695  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:26.571717  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:26.571724  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:26.571730  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:26.571735  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:26.571741  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:26.571747  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:26 GMT
	I0103 19:20:26.571752  102835 round_trippers.go:580]     Audit-Id: 06334a7c-42f6-4904-a6fe-06154528301d
	I0103 19:20:26.571853  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:27.069480  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:27.069512  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:27.069524  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:27.069534  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:27.071899  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:27.071923  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:27.071934  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:27.071942  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:27.071951  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:27.071959  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:27.071967  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:27 GMT
	I0103 19:20:27.071982  102835 round_trippers.go:580]     Audit-Id: 7a73e2b5-63d5-499d-8683-c745779262ee
	I0103 19:20:27.072092  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:27.072409  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:27.568653  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:27.568678  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:27.568689  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:27.568697  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:27.570926  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:27.570977  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:27.570989  102835 round_trippers.go:580]     Audit-Id: 37d3db57-4d5d-43bd-b2ac-c7ef3d3f2f9e
	I0103 19:20:27.570998  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:27.571008  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:27.571026  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:27.571039  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:27.571051  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:27 GMT
	I0103 19:20:27.571166  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"451","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5762 chars]
	I0103 19:20:28.068632  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:28.068658  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:28.068666  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:28.068672  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:28.071243  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:28.071267  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:28.071276  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:28.071283  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:28 GMT
	I0103 19:20:28.071291  102835 round_trippers.go:580]     Audit-Id: 58bfa877-bbe7-4f8f-8b9a-bb0ab789346d
	I0103 19:20:28.071299  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:28.071308  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:28.071321  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:28.071432  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:28.568876  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:28.568900  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:28.568908  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:28.568914  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:28.583088  102835 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0103 19:20:28.583120  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:28.583130  102835 round_trippers.go:580]     Audit-Id: e9998422-c35c-4197-870d-483ab980eaaf
	I0103 19:20:28.583138  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:28.583145  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:28.583153  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:28.583162  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:28.583174  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:28 GMT
	I0103 19:20:28.583320  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:29.068548  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:29.068571  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:29.068579  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:29.068586  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:29.070824  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:29.070862  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:29.070872  102835 round_trippers.go:580]     Audit-Id: cdb07f9e-a2ee-46e9-b5b4-2d7e510b5fe5
	I0103 19:20:29.070884  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:29.070892  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:29.070900  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:29.070908  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:29.070917  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:29 GMT
	I0103 19:20:29.071073  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:29.569003  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:29.569024  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:29.569032  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:29.569038  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:29.571525  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:29.571548  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:29.571558  102835 round_trippers.go:580]     Audit-Id: 070afe50-7030-4ae2-a47b-acb71d106fe1
	I0103 19:20:29.571565  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:29.571572  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:29.571580  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:29.571589  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:29.571599  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:29 GMT
	I0103 19:20:29.571711  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:29.572055  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:30.069280  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:30.069302  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:30.069310  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:30.069317  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:30.071749  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:30.071778  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:30.071786  102835 round_trippers.go:580]     Audit-Id: a3e47ba3-961b-4713-a752-aaad91eef84c
	I0103 19:20:30.071797  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:30.071803  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:30.071808  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:30.071815  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:30.071825  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:30 GMT
	I0103 19:20:30.071967  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:30.568600  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:30.568622  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:30.568630  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:30.568636  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:30.570839  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:30.570858  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:30.570865  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:30.570871  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:30.570876  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:30.570881  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:30.570888  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:30 GMT
	I0103 19:20:30.570896  102835 round_trippers.go:580]     Audit-Id: c09a2f3c-dbaa-4f8e-afed-1fe3a863befb
	I0103 19:20:30.571019  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:31.069570  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:31.069593  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:31.069601  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:31.069612  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:31.071983  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:31.072008  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:31.072018  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:31.072026  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:31.072033  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:31.072041  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:31 GMT
	I0103 19:20:31.072049  102835 round_trippers.go:580]     Audit-Id: 7717c8c3-ccab-48d9-a092-5f8ff58f60ac
	I0103 19:20:31.072058  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:31.072170  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:31.568640  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:31.568662  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:31.568669  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:31.568675  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:31.571014  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:31.571040  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:31.571050  102835 round_trippers.go:580]     Audit-Id: 8d916baa-3b8f-400b-be47-291e1548fd40
	I0103 19:20:31.571060  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:31.571068  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:31.571077  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:31.571085  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:31.571094  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:31 GMT
	I0103 19:20:31.571228  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:32.068768  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:32.068790  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:32.068798  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:32.068804  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:32.071366  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:32.071385  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:32.071393  102835 round_trippers.go:580]     Audit-Id: bac112ba-8497-421b-aae1-b5e73d73d648
	I0103 19:20:32.071399  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:32.071407  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:32.071416  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:32.071425  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:32.071436  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:32 GMT
	I0103 19:20:32.071541  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:32.071927  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:32.569171  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:32.569211  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:32.569219  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:32.569224  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:32.571522  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:32.571540  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:32.571549  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:32.571568  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:32 GMT
	I0103 19:20:32.571578  102835 round_trippers.go:580]     Audit-Id: 3db8d71e-0d3e-4faf-9c03-205b5642fad5
	I0103 19:20:32.571585  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:32.571591  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:32.571604  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:32.571721  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:33.069383  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:33.069412  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:33.069422  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:33.069431  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:33.071707  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:33.071725  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:33.071732  102835 round_trippers.go:580]     Audit-Id: 8321549e-da41-4cdf-ae8d-67f5da95ffc5
	I0103 19:20:33.071738  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:33.071742  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:33.071749  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:33.071758  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:33.071767  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:33 GMT
	I0103 19:20:33.071880  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:33.569491  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:33.569515  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:33.569526  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:33.569534  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:33.571666  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:33.571688  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:33.571697  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:33.571705  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:33.571713  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:33.571722  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:33 GMT
	I0103 19:20:33.571735  102835 round_trippers.go:580]     Audit-Id: 27043c21-553f-4537-8610-85ae6e2d951f
	I0103 19:20:33.571747  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:33.571869  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:34.068924  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:34.068949  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:34.068962  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:34.068970  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:34.071280  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:34.071306  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:34.071315  102835 round_trippers.go:580]     Audit-Id: 42b53997-6e93-4d1a-89ac-313765eae265
	I0103 19:20:34.071323  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:34.071330  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:34.071336  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:34.071344  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:34.071353  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:34 GMT
	I0103 19:20:34.071483  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:34.569026  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:34.569052  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:34.569071  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:34.569079  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:34.571557  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:34.571585  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:34.571597  102835 round_trippers.go:580]     Audit-Id: 22464b3e-c5e4-4017-ab70-4a8cb022bb24
	I0103 19:20:34.571605  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:34.571614  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:34.571622  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:34.571630  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:34.571638  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:34 GMT
	I0103 19:20:34.571784  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:34.572100  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
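
The round_trippers.go lines throughout this trace come from client-go's HTTP debug logging, which is enabled at sufficiently high --v levels and prints each request line, the request headers, the response status with latency, and the response headers. As a rough illustration only (this is not client-go's actual code), the same effect can be had by wrapping an http.RoundTripper; the endpoint below is the API server address from this log and is only reachable inside the test environment:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingTransport wraps an http.RoundTripper and prints the request
// line, request headers, response status with latency, and response
// headers, in the same spirit as the round_trippers.go output above.
type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for k, vs := range req.Header {
		for _, v := range vs {
			fmt.Printf("    %s: %s\n", k, v)
		}
	}
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\n", resp.Status, time.Since(start).Milliseconds())
	fmt.Println("Response Headers:")
	for k, vs := range resp.Header {
		for _, v := range vs {
			fmt.Printf("    %s: %s\n", k, v)
		}
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	// Address taken from the log; outside the test cluster this simply errors.
	if _, err := client.Get("https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02"); err != nil {
		fmt.Println("request failed:", err)
	}
}

With a wrapper like this installed on the client's transport, every poll expands into the dozen or so lines seen in each iteration of this log.
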
	I0103 19:20:35.069494  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:35.069521  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:35.069533  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:35.069543  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:35.071970  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:35.071996  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:35.072007  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:35.072016  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:35.072027  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:35 GMT
	I0103 19:20:35.072036  102835 round_trippers.go:580]     Audit-Id: 086e37d4-0b53-4a23-b425-a56654a91922
	I0103 19:20:35.072049  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:35.072054  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:35.072152  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:35.568760  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:35.568786  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:35.568796  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:35.568805  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:35.571188  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:35.571218  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:35.571229  102835 round_trippers.go:580]     Audit-Id: 66a51c7f-c059-4abd-8b18-f3e4d327d362
	I0103 19:20:35.571238  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:35.571246  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:35.571256  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:35.571264  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:35.571277  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:35 GMT
	I0103 19:20:35.571412  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:36.068868  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:36.068890  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:36.068898  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:36.068903  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:36.071257  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:36.071279  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:36.071286  102835 round_trippers.go:580]     Audit-Id: 43f77a76-e96b-4933-9829-2902ff6b5a8f
	I0103 19:20:36.071292  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:36.071300  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:36.071308  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:36.071323  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:36.071333  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:36 GMT
	I0103 19:20:36.071583  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:36.569246  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:36.569269  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:36.569276  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:36.569282  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:36.571583  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:36.571606  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:36.571616  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:36.571623  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:36.571631  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:36.571639  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:36 GMT
	I0103 19:20:36.571647  102835 round_trippers.go:580]     Audit-Id: 52aea1e2-333b-4cb6-8aa6-658c9282f164
	I0103 19:20:36.571654  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:36.571788  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:37.069392  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:37.069412  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:37.069420  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:37.069426  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:37.071675  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:37.071693  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:37.071700  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:37.071705  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:37.071711  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:37.071716  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:37 GMT
	I0103 19:20:37.071723  102835 round_trippers.go:580]     Audit-Id: 1790e3b1-e4c4-44f0-8b01-c72c145a825e
	I0103 19:20:37.071731  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:37.071882  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:37.072215  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:37.569599  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:37.569623  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:37.569631  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:37.569637  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:37.571765  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:37.571791  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:37.571806  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:37.571815  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:37.571823  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:37.571829  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:37.571835  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:37 GMT
	I0103 19:20:37.571846  102835 round_trippers.go:580]     Audit-Id: 7fd51337-206e-42e9-8634-2dde4c12739c
	I0103 19:20:37.571976  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:38.069222  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:38.069243  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:38.069251  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:38.069258  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:38.071527  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:38.071550  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:38.071557  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:38.071562  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:38.071567  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:38.071573  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:38.071579  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:38 GMT
	I0103 19:20:38.071589  102835 round_trippers.go:580]     Audit-Id: 6f1e1f83-027a-4171-9819-e6b87bd972a1
	I0103 19:20:38.071716  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:38.569415  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:38.569438  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:38.569446  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:38.569458  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:38.571767  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:38.571790  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:38.571796  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:38.571802  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:38.571807  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:38 GMT
	I0103 19:20:38.571812  102835 round_trippers.go:580]     Audit-Id: 68d98712-bd3a-4ee2-a867-a5efc2c77edc
	I0103 19:20:38.571817  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:38.571822  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:38.571932  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:39.069584  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:39.069604  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:39.069612  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:39.069618  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:39.071952  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:39.071976  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:39.071986  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:39 GMT
	I0103 19:20:39.071994  102835 round_trippers.go:580]     Audit-Id: 3d5aa1ba-f2c0-414f-b2c4-b1606924322b
	I0103 19:20:39.072004  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:39.072013  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:39.072025  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:39.072038  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:39.072228  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:39.072526  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:39.568959  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:39.568981  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:39.568988  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:39.568995  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:39.571459  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:39.571479  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:39.571486  102835 round_trippers.go:580]     Audit-Id: c7974a60-908b-4658-b999-bc79012e800c
	I0103 19:20:39.571491  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:39.571497  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:39.571502  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:39.571508  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:39.571514  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:39 GMT
	I0103 19:20:39.571663  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:40.069371  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:40.069395  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:40.069403  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:40.069409  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:40.071741  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:40.071759  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:40.071766  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:40.071771  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:40.071777  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:40.071782  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:40 GMT
	I0103 19:20:40.071789  102835 round_trippers.go:580]     Audit-Id: 4f707a1d-d29c-4c92-912a-84c1b4df0775
	I0103 19:20:40.071797  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:40.072008  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:40.568531  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:40.568553  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:40.568561  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:40.568567  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:40.570884  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:40.570905  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:40.570915  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:40.570924  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:40.570933  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:40.570942  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:40.570953  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:40 GMT
	I0103 19:20:40.570958  102835 round_trippers.go:580]     Audit-Id: 5d9c0765-7b4a-4125-8bd4-d306ac1a06a3
	I0103 19:20:40.571061  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:41.068593  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:41.068618  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:41.068626  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:41.068634  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:41.071053  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:41.071071  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:41.071078  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:41 GMT
	I0103 19:20:41.071083  102835 round_trippers.go:580]     Audit-Id: 83d1bd3e-997e-4710-bbce-e54c8796f96b
	I0103 19:20:41.071088  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:41.071094  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:41.071099  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:41.071110  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:41.071230  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:41.569513  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:41.569534  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:41.569542  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:41.569548  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:41.571781  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:41.571808  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:41.571819  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:41.571829  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:41.571838  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:41 GMT
	I0103 19:20:41.571847  102835 round_trippers.go:580]     Audit-Id: 516296fc-d363-46fb-a313-ff02b365a224
	I0103 19:20:41.571854  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:41.571862  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:41.572008  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:41.572419  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:42.068549  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:42.068570  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:42.068583  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:42.068589  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:42.071010  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:42.071031  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:42.071040  102835 round_trippers.go:580]     Audit-Id: 0bf6ca9a-284f-4445-a121-0a14c5616c47
	I0103 19:20:42.071051  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:42.071060  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:42.071071  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:42.071084  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:42.071095  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:42 GMT
	I0103 19:20:42.071248  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:42.568659  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:42.568685  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:42.568696  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:42.568704  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:42.571251  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:42.571281  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:42.571291  102835 round_trippers.go:580]     Audit-Id: 5cd62004-c84e-41a2-b75d-795e85c70b66
	I0103 19:20:42.571297  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:42.571302  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:42.571308  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:42.571314  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:42.571320  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:42 GMT
	I0103 19:20:42.571511  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:43.069197  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:43.069228  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:43.069239  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:43.069246  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:43.071613  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:43.071635  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:43.071642  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:43.071648  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:43.071653  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:43.071658  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:43 GMT
	I0103 19:20:43.071665  102835 round_trippers.go:580]     Audit-Id: b8e96ed0-be5e-4ec0-abe9-2e004142aacb
	I0103 19:20:43.071672  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:43.071834  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:43.569468  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:43.569490  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:43.569498  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:43.569504  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:43.571912  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:43.571938  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:43.571949  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:43.571959  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:43.571971  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:43 GMT
	I0103 19:20:43.571981  102835 round_trippers.go:580]     Audit-Id: e06c903a-c41c-4e4c-ad45-b2ca5727edb7
	I0103 19:20:43.571996  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:43.572005  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:43.572164  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:43.572460  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:44.069144  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:44.069165  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:44.069173  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:44.069179  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:44.071766  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:44.071786  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:44.071796  102835 round_trippers.go:580]     Audit-Id: 8c67c19d-e255-4134-8238-a936e9fa2434
	I0103 19:20:44.071803  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:44.071810  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:44.071817  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:44.071824  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:44.071831  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:44 GMT
	I0103 19:20:44.071976  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:44.568549  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:44.568573  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:44.568581  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:44.568590  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:44.570952  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:44.570973  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:44.570983  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:44.570993  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:44 GMT
	I0103 19:20:44.571002  102835 round_trippers.go:580]     Audit-Id: 96f6f5d5-6df5-489c-a8b9-71f19cafb1b5
	I0103 19:20:44.571011  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:44.571023  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:44.571031  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:44.571199  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:45.068694  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:45.068725  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:45.068734  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:45.068739  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:45.071582  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:45.071611  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:45.071622  102835 round_trippers.go:580]     Audit-Id: 96ccf705-717f-4e39-84c7-377baf56efc5
	I0103 19:20:45.071632  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:45.071642  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:45.071650  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:45.071661  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:45.071666  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:45 GMT
	I0103 19:20:45.071776  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:45.569376  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:45.569403  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:45.569414  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:45.569423  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:45.572087  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:45.572113  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:45.572129  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:45.572137  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:45.572148  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:45 GMT
	I0103 19:20:45.572161  102835 round_trippers.go:580]     Audit-Id: 62a109fb-7dcd-473c-b9a5-4596008fdf7a
	I0103 19:20:45.572168  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:45.572177  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:45.572370  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:45.572682  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
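
What the loop is doing: node_ready.go issues a GET for /api/v1/nodes/multinode-867906-m02 roughly every 500ms and keeps waiting while the node's Ready condition is still False. The sketch below is a minimal illustration of such a wait using client-go against a local kubeconfig; it is not minikube's actual implementation, and the 6-minute overall timeout and function names here are assumptions for the example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports a
// Ready condition with status True, mirroring the GET-every-500ms
// pattern visible in the trace above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // node became Ready
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // overall deadline hit while still NotReady
		case <-time.After(interval):
		}
	}
}

func main() {
	// Assumed kubeconfig location (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // illustrative timeout
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-867906-m02", 500*time.Millisecond); err != nil {
		fmt.Println("node not ready:", err)
		return
	}
	fmt.Println("node is ready")
}

In the trace above each GET completes in about 2 milliseconds, so the wall-clock time is dominated entirely by the polling interval; the node never flips to Ready within this excerpt.
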
	I0103 19:20:46.068968  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:46.068993  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:46.069004  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:46.069017  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:46.071408  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:46.071436  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:46.071447  102835 round_trippers.go:580]     Audit-Id: e77e71fe-aed6-4901-8247-45803c9812aa
	I0103 19:20:46.071456  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:46.071474  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:46.071480  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:46.071488  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:46.071497  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:46 GMT
	I0103 19:20:46.071625  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:46.569221  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:46.569243  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:46.569251  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:46.569257  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:46.571576  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:46.571601  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:46.571612  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:46.571621  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:46.571630  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:46.571639  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:46 GMT
	I0103 19:20:46.571647  102835 round_trippers.go:580]     Audit-Id: b3c4a6ac-79fc-467e-98f8-e796da3f20f4
	I0103 19:20:46.571656  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:46.571786  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:47.069321  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:47.069346  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:47.069354  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:47.069360  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:47.071585  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:47.071607  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:47.071615  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:47 GMT
	I0103 19:20:47.071622  102835 round_trippers.go:580]     Audit-Id: 671ee38e-c49d-4006-af33-ec740e885215
	I0103 19:20:47.071631  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:47.071639  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:47.071647  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:47.071656  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:47.071793  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:47.569420  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:47.569445  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:47.569452  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:47.569459  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:47.571884  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:47.571909  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:47.571919  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:47.571927  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:47.571935  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:47.571943  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:47 GMT
	I0103 19:20:47.571951  102835 round_trippers.go:580]     Audit-Id: 0ecd061e-3041-4585-b9c3-4d82cb5fecef
	I0103 19:20:47.571961  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:47.572135  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:48.068671  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:48.068699  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:48.068710  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:48.068717  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:48.070811  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:48.070837  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:48.070847  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:48.070856  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:48.070864  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:48.070872  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:48.070884  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:48 GMT
	I0103 19:20:48.070892  102835 round_trippers.go:580]     Audit-Id: 4ff7c9a9-1469-415e-ac8a-7530783233df
	I0103 19:20:48.071104  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:48.071436  102835 node_ready.go:58] node "multinode-867906-m02" has status "Ready":"False"
	I0103 19:20:48.568629  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:48.568654  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:48.568662  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:48.568668  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:48.571124  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:48.571149  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:48.571160  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:48 GMT
	I0103 19:20:48.571168  102835 round_trippers.go:580]     Audit-Id: 6db9b179-8e38-4dc1-b10c-9fbf965a21ef
	I0103 19:20:48.571175  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:48.571183  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:48.571191  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:48.571201  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:48.571323  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:49.068739  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:49.068760  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.068768  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.068775  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.071112  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:49.071131  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.071138  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.071144  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.071149  102835 round_trippers.go:580]     Audit-Id: 02f29d42-192b-40b6-90a2-eff0ba4b6370
	I0103 19:20:49.071157  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.071165  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.071175  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.071284  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"472","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6031 chars]
	I0103 19:20:49.569028  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:49.569052  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.569060  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.569066  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.571315  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:49.571333  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.571340  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.571345  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.571350  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.571355  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.571360  102835 round_trippers.go:580]     Audit-Id: 134f35ec-b375-44aa-b112-d30a6a2edc5e
	I0103 19:20:49.571365  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.571472  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"493","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5848 chars]
	I0103 19:20:49.571814  102835 node_ready.go:49] node "multinode-867906-m02" has status "Ready":"True"
	I0103 19:20:49.571833  102835 node_ready.go:38] duration metric: took 31.00343782s waiting for node "multinode-867906-m02" to be "Ready" ...
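
The node_ready wait that just completed is a simple poll: one GET on the node object roughly every 500ms until its Ready condition reports True. A minimal sketch of such a loop with client-go, assuming a pre-built clientset; the function name waitForNodeReady is hypothetical and this is not minikube's actual node_ready.go implementation:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the API server until the named node reports a
// Ready condition with status True, mirroring the ~500ms GET loop in
// the log above. Illustrative sketch, not minikube's implementation.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return fmt.Errorf("get node %q: %w", name, err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}
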
	I0103 19:20:49.571842  102835 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:20:49.571907  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 19:20:49.571917  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.571924  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.571930  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.574866  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:49.574883  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.574894  102835 round_trippers.go:580]     Audit-Id: 872e3afb-de7e-4d51-844d-832f11879a7d
	I0103 19:20:49.574899  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.574905  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.574911  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.574920  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.574928  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.575396  102835 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"494"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"405","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0103 19:20:49.578058  102835 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qb6ll" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.578166  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qb6ll
	I0103 19:20:49.578177  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.578188  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.578200  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.580040  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.580054  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.580060  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.580066  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.580071  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.580076  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.580088  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.580093  102835 round_trippers.go:580]     Audit-Id: 9b686935-989a-4291-aef6-268ee0bb844e
	I0103 19:20:49.580167  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qb6ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a10d6003-2e28-4c8f-a743-87a3a9e768be","resourceVersion":"405","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffd7e003-2e2d-44e7-8e89-948ccc4a0c83\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0103 19:20:49.580536  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:49.580549  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.580556  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.580562  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.582276  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.582296  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.582304  102835 round_trippers.go:580]     Audit-Id: 1dc9b607-0d09-4801-a389-fba0dcc7c204
	I0103 19:20:49.582310  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.582316  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.582321  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.582327  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.582335  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.582437  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:49.582796  102835 pod_ready.go:92] pod "coredns-5dd5756b68-qb6ll" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:49.582816  102835 pod_ready.go:81] duration metric: took 4.73445ms waiting for pod "coredns-5dd5756b68-qb6ll" in "kube-system" namespace to be "Ready" ...
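
Each pod_ready step here follows the same pattern: fetch the pod and inspect its PodReady condition. A sketch of that check across the system-critical label selectors named earlier, assuming a pre-built clientset; the helper names systemPodsReady and podReady are hypothetical, not minikube's code:

package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// systemPodsReady lists kube-system pods matching each selector and
// reports whether every matched pod is Ready. The selectors mirror the
// label list in the pod_ready.go line above. Illustrative sketch only.
func systemPodsReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, fmt.Errorf("list pods %q: %w", sel, err)
		}
		for _, pod := range pods.Items {
			if !podReady(&pod) {
				return false, nil
			}
		}
	}
	return true, nil
}

// podReady checks the pod's PodReady condition, as each pod_ready.go
// step in the log does for one system pod.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}
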
	I0103 19:20:49.582828  102835 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.582885  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-867906
	I0103 19:20:49.582931  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.582945  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.582954  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.584536  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.584555  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.584565  102835 round_trippers.go:580]     Audit-Id: 49339b20-8b73-45d4-b011-4a4bd06dbe16
	I0103 19:20:49.584585  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.584595  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.584604  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.584617  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.584625  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.584737  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-867906","namespace":"kube-system","uid":"e218d02e-1660-479e-91d7-9a25bce7cbc1","resourceVersion":"277","creationTimestamp":"2024-01-03T19:19:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"096508eeb789ebd52eb384a7c8522295","kubernetes.io/config.mirror":"096508eeb789ebd52eb384a7c8522295","kubernetes.io/config.seen":"2024-01-03T19:19:15.888143739Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0103 19:20:49.585091  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:49.585104  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.585111  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.585117  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.586750  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.586766  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.586773  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.586778  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.586783  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.586789  102835 round_trippers.go:580]     Audit-Id: 0ad53000-8a1a-4ba2-8021-398e8c6a287b
	I0103 19:20:49.586795  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.586803  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.586979  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:49.587260  102835 pod_ready.go:92] pod "etcd-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:49.587274  102835 pod_ready.go:81] duration metric: took 4.436953ms waiting for pod "etcd-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.587286  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.587332  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-867906
	I0103 19:20:49.587339  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.587345  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.587351  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.589074  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.589091  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.589097  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.589102  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.589107  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.589112  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.589118  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.589125  102835 round_trippers.go:580]     Audit-Id: c9288946-6cfa-41ca-a4b4-a6454bbcc27b
	I0103 19:20:49.589267  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-867906","namespace":"kube-system","uid":"1f53d173-6053-4eae-aaa9-8ffcb1c17634","resourceVersion":"260","creationTimestamp":"2024-01-03T19:19:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a0b48c6d0d511ddb918d1ee65203574b","kubernetes.io/config.mirror":"a0b48c6d0d511ddb918d1ee65203574b","kubernetes.io/config.seen":"2024-01-03T19:19:10.132179438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0103 19:20:49.589650  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:49.589663  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.589670  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.589677  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.591190  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.591206  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.591215  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.591223  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.591232  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.591241  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.591249  102835 round_trippers.go:580]     Audit-Id: e3e7c1bc-354d-4c2f-ab02-b3d767e73da3
	I0103 19:20:49.591268  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.591352  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:49.591758  102835 pod_ready.go:92] pod "kube-apiserver-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:49.591777  102835 pod_ready.go:81] duration metric: took 4.476988ms waiting for pod "kube-apiserver-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.591789  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.591848  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-867906
	I0103 19:20:49.591857  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.591867  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.591881  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.593560  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.593578  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.593588  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.593597  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.593606  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.593616  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.593621  102835 round_trippers.go:580]     Audit-Id: 4fb6851c-7712-400d-ba5c-65d7af02777c
	I0103 19:20:49.593627  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.593745  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-867906","namespace":"kube-system","uid":"528f1b6f-da53-4e14-87dc-90af9b16865b","resourceVersion":"256","creationTimestamp":"2024-01-03T19:19:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2c9e1aa27124c3c4642e5059650a8424","kubernetes.io/config.mirror":"2c9e1aa27124c3c4642e5059650a8424","kubernetes.io/config.seen":"2024-01-03T19:19:15.888157675Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0103 19:20:49.594233  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:49.594249  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.594260  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.594269  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.595778  102835 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 19:20:49.595791  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.595797  102835 round_trippers.go:580]     Audit-Id: 745ea59d-c295-41d2-880d-07e4a71f3a6a
	I0103 19:20:49.595802  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.595807  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.595812  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.595817  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.595823  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.595937  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:49.596202  102835 pod_ready.go:92] pod "kube-controller-manager-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:49.596216  102835 pod_ready.go:81] duration metric: took 4.415912ms waiting for pod "kube-controller-manager-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.596223  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d5vmq" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:49.769651  102835 request.go:629] Waited for 173.353236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vmq
	I0103 19:20:49.769709  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vmq
	I0103 19:20:49.769715  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.769722  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.769728  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.772016  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:49.772036  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.772045  102835 round_trippers.go:580]     Audit-Id: 1be01d25-6aa5-4939-a97a-90267ccb587d
	I0103 19:20:49.772053  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.772067  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.772075  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.772081  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.772086  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.772195  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vmq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d85c88ab-4ffa-4962-be3a-077f88524125","resourceVersion":"461","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bb84658-ab5f-48b7-bb1e-58fdc441b4c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bb84658-ab5f-48b7-bb1e-58fdc441b4c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 19:20:49.969944  102835 request.go:629] Waited for 197.355818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:49.970024  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906-m02
	I0103 19:20:49.970033  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:49.970044  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:49.970053  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:49.972554  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:49.972574  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:49.972585  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:49.972592  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:49 GMT
	I0103 19:20:49.972597  102835 round_trippers.go:580]     Audit-Id: 5b3d5d2c-fa63-4a09-a4b1-36199ceaf2fe
	I0103 19:20:49.972602  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:49.972608  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:49.972613  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:49.972729  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906-m02","uid":"00653516-31bb-4e25-8a3e-f54d9a866cc2","resourceVersion":"493","creationTimestamp":"2024-01-03T19:20:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T19_20_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5848 chars]
	I0103 19:20:49.973053  102835 pod_ready.go:92] pod "kube-proxy-d5vmq" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:49.973069  102835 pod_ready.go:81] duration metric: took 376.841165ms waiting for pod "kube-proxy-d5vmq" in "kube-system" namespace to be "Ready" ...
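
The "Waited for ... due to client-side throttling" lines are produced by client-go's client-side rate limiter, which defaults to 5 requests per second with a burst of 10; once the token bucket drains, requests block, and when the delay is noticeable (on the order of 50ms or more) client-go logs this message, explicitly distinguishing it from server-side API Priority and Fairness. A sketch of raising those limits when building a client; the kubeconfig path here is a placeholder:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative only.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; runs of sequential GETs
	// like the poll loops above can drain the bucket and trigger the
	// throttling waits seen in this log. Raising the limits trades
	// extra API-server load for lower client latency.
	config.QPS = 50
	config.Burst = 100
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Printf("built rate-limited clientset: %T\n", cs)
}
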
	I0103 19:20:49.973078  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrm8b" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:50.169074  102835 request.go:629] Waited for 195.931264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrm8b
	I0103 19:20:50.169144  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrm8b
	I0103 19:20:50.169149  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:50.169157  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:50.169166  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:50.171594  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:50.171616  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:50.171627  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:50.171636  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:50 GMT
	I0103 19:20:50.171645  102835 round_trippers.go:580]     Audit-Id: 71dadaea-7731-4181-8338-2abd35e8460d
	I0103 19:20:50.171655  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:50.171660  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:50.171665  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:50.171756  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nrm8b","generateName":"kube-proxy-","namespace":"kube-system","uid":"025f5c46-e360-423d-9c4f-eee8af0472ae","resourceVersion":"372","creationTimestamp":"2024-01-03T19:19:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4bb84658-ab5f-48b7-bb1e-58fdc441b4c5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4bb84658-ab5f-48b7-bb1e-58fdc441b4c5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0103 19:20:50.369662  102835 request.go:629] Waited for 197.378363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:50.369723  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:50.369727  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:50.369735  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:50.369740  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:50.372024  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:50.372048  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:50.372057  102835 round_trippers.go:580]     Audit-Id: d51f7c1e-b276-421e-bae8-6fb8af2f5c61
	I0103 19:20:50.372066  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:50.372073  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:50.372093  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:50.372105  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:50.372117  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:50 GMT
	I0103 19:20:50.372225  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:50.372578  102835 pod_ready.go:92] pod "kube-proxy-nrm8b" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:50.372594  102835 pod_ready.go:81] duration metric: took 399.510942ms waiting for pod "kube-proxy-nrm8b" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:50.372603  102835 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:50.569601  102835 request.go:629] Waited for 196.93067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-867906
	I0103 19:20:50.569675  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-867906
	I0103 19:20:50.569680  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:50.569688  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:50.569698  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:50.572088  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:50.572110  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:50.572119  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:50.572124  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:50.572134  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:50.572140  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:50 GMT
	I0103 19:20:50.572146  102835 round_trippers.go:580]     Audit-Id: 6ea60b68-d271-41d8-9bb2-f5f14d31293b
	I0103 19:20:50.572152  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:50.572317  102835 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-867906","namespace":"kube-system","uid":"2a794cae-9d56-476c-9b6b-51742cdf9118","resourceVersion":"258","creationTimestamp":"2024-01-03T19:19:16Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6888e32d69fb1b48c672bb546f324150","kubernetes.io/config.mirror":"6888e32d69fb1b48c672bb546f324150","kubernetes.io/config.seen":"2024-01-03T19:19:15.888158968Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T19:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0103 19:20:50.769873  102835 request.go:629] Waited for 197.191726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:50.769988  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-867906
	I0103 19:20:50.770000  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:50.770012  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:50.770024  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:50.772227  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:50.772248  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:50.772258  102835 round_trippers.go:580]     Audit-Id: 7b573e20-3449-4b0d-a22b-6937a5330e6f
	I0103 19:20:50.772266  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:50.772274  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:50.772280  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:50.772287  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:50.772295  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:50 GMT
	I0103 19:20:50.772402  102835 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T19:19:13Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0103 19:20:50.772691  102835 pod_ready.go:92] pod "kube-scheduler-multinode-867906" in "kube-system" namespace has status "Ready":"True"
	I0103 19:20:50.772706  102835 pod_ready.go:81] duration metric: took 400.08944ms waiting for pod "kube-scheduler-multinode-867906" in "kube-system" namespace to be "Ready" ...
	I0103 19:20:50.772716  102835 pod_ready.go:38] duration metric: took 1.200866394s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 19:20:50.772737  102835 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:20:50.772779  102835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:20:50.783259  102835 system_svc.go:56] duration metric: took 10.516235ms WaitForService to wait for kubelet.
	I0103 19:20:50.783279  102835 kubeadm.go:581] duration metric: took 32.232356024s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:20:50.783296  102835 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:20:50.969712  102835 request.go:629] Waited for 186.344404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0103 19:20:50.969782  102835 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0103 19:20:50.969787  102835 round_trippers.go:469] Request Headers:
	I0103 19:20:50.969794  102835 round_trippers.go:473]     Accept: application/json, */*
	I0103 19:20:50.969800  102835 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0103 19:20:50.972453  102835 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 19:20:50.972476  102835 round_trippers.go:577] Response Headers:
	I0103 19:20:50.972483  102835 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 19:20:50.972489  102835 round_trippers.go:580]     Content-Type: application/json
	I0103 19:20:50.972494  102835 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c81176b1-b5f1-489d-baa3-35fe76c45731
	I0103 19:20:50.972499  102835 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cbc30b2-bb90-4f63-881f-afefc3141a76
	I0103 19:20:50.972504  102835 round_trippers.go:580]     Date: Wed, 03 Jan 2024 19:20:50 GMT
	I0103 19:20:50.972509  102835 round_trippers.go:580]     Audit-Id: 437d11e8-cfe0-47c4-bf27-ef235c5d4373
	I0103 19:20:50.972669  102835 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"494"},"items":[{"metadata":{"name":"multinode-867906","uid":"bf159edf-6903-41f9-bb42-fdd558361f8f","resourceVersion":"385","creationTimestamp":"2024-01-03T19:19:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-867906","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-867906","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T19_19_16_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I0103 19:20:50.973147  102835 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0103 19:20:50.973162  102835 node_conditions.go:123] node cpu capacity is 8
	I0103 19:20:50.973170  102835 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0103 19:20:50.973173  102835 node_conditions.go:123] node cpu capacity is 8
	I0103 19:20:50.973177  102835 node_conditions.go:105] duration metric: took 189.877633ms to run NodePressure ...
	I0103 19:20:50.973188  102835 start.go:228] waiting for startup goroutines ...
	I0103 19:20:50.973214  102835 start.go:242] writing updated cluster config ...
	I0103 19:20:50.973472  102835 ssh_runner.go:195] Run: rm -f paused
	I0103 19:20:51.020294  102835 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 19:20:51.023414  102835 out.go:177] * Done! kubectl is now configured to use "multinode-867906" cluster and "default" namespace by default
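	The trace above shows minikube polling the API server for pod and node readiness (pod_ready.go, node_conditions.go) before declaring the cluster up. For reference, here is a minimal client-go sketch of the same kind of check: list the nodes and report each Ready condition. It assumes a kubeconfig at the default path and is illustrative, not minikube's own code.
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "path/filepath"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	        "k8s.io/client-go/util/homedir"
	    )
	
	    func main() {
	        // Build a client from the default kubeconfig (assumption: ~/.kube/config).
	        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            panic(err)
	        }
	        clientset, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        // The same GET /api/v1/nodes the round_trippers lines above record.
	        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, node := range nodes.Items {
	            for _, cond := range node.Status.Conditions {
	                if cond.Type == corev1.NodeReady {
	                    fmt.Printf("%s Ready=%s\n", node.Name, cond.Status)
	                }
	            }
	        }
	    }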
	
	
	==> CRI-O <==
	Jan 03 19:20:00 multinode-867906 crio[958]: time="2024-01-03 19:20:00.987863220Z" level=info msg="Starting container: 49a50d824c0ce3dd6f2f1be58cfc2c90882c84c7493cab10792a39e49872ea10" id=3cf18f12-7326-46dd-ae3f-f0990cf3885e name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 19:20:00 multinode-867906 crio[958]: time="2024-01-03 19:20:00.992671130Z" level=info msg="Created container 835395ac2f2c3fcfad6cd730aac082ac749186f23ecec252543d79de378e419c: kube-system/storage-provisioner/storage-provisioner" id=f26cdcc3-df91-41ea-a160-ca1650d58b9d name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 19:20:00 multinode-867906 crio[958]: time="2024-01-03 19:20:00.993189453Z" level=info msg="Starting container: 835395ac2f2c3fcfad6cd730aac082ac749186f23ecec252543d79de378e419c" id=52b1f253-6939-4309-bf66-c27e44c713ee name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 19:20:00 multinode-867906 crio[958]: time="2024-01-03 19:20:00.994680531Z" level=info msg="Started container" PID=2333 containerID=49a50d824c0ce3dd6f2f1be58cfc2c90882c84c7493cab10792a39e49872ea10 description=kube-system/coredns-5dd5756b68-qb6ll/coredns id=3cf18f12-7326-46dd-ae3f-f0990cf3885e name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc35bbb3a2c804367b3638dc959650a3595ab097333212010ef955bbf5dfd5c1
	Jan 03 19:20:01 multinode-867906 crio[958]: time="2024-01-03 19:20:01.001222928Z" level=info msg="Started container" PID=2335 containerID=835395ac2f2c3fcfad6cd730aac082ac749186f23ecec252543d79de378e419c description=kube-system/storage-provisioner/storage-provisioner id=52b1f253-6939-4309-bf66-c27e44c713ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=d7c2d9c4f4aceaca5712eaa499e128c51f8eadea48a3d82e886bf44658adeebb
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.030066488Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-nkg7x/POD" id=407761ae-a48d-47a5-b8b6-eee5e6cb3203 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.030167632Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.046739256Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-nkg7x Namespace:default ID:7a9f9c69e9d9950feda5e4629edbcd13f32ab564849df042c942d45a7a7d52e9 UID:c7aa1d5c-05fa-4114-9e2e-721cef17f4cd NetNS:/var/run/netns/0da95c85-a5d2-4f4f-9478-6d4ddc5dbff6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.046771573Z" level=info msg="Adding pod default_busybox-5bc68d56bd-nkg7x to CNI network \"kindnet\" (type=ptp)"
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.056753415Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-nkg7x Namespace:default ID:7a9f9c69e9d9950feda5e4629edbcd13f32ab564849df042c942d45a7a7d52e9 UID:c7aa1d5c-05fa-4114-9e2e-721cef17f4cd NetNS:/var/run/netns/0da95c85-a5d2-4f4f-9478-6d4ddc5dbff6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.056918520Z" level=info msg="Checking pod default_busybox-5bc68d56bd-nkg7x for CNI network kindnet (type=ptp)"
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.081240386Z" level=info msg="Ran pod sandbox 7a9f9c69e9d9950feda5e4629edbcd13f32ab564849df042c942d45a7a7d52e9 with infra container: default/busybox-5bc68d56bd-nkg7x/POD" id=407761ae-a48d-47a5-b8b6-eee5e6cb3203 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.082382429Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=71d8db78-cc20-49c2-9066-398b498bfd97 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.082677831Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=71d8db78-cc20-49c2-9066-398b498bfd97 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.083498716Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=32e713e1-0aa4-4980-8f39-ee66455dc8ee name=/runtime.v1.ImageService/PullImage
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.094840884Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 03 19:20:52 multinode-867906 crio[958]: time="2024-01-03 19:20:52.846969318Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.589300164Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=32e713e1-0aa4-4980-8f39-ee66455dc8ee name=/runtime.v1.ImageService/PullImage
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.590356710Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a837c62f-103b-493c-8c12-14f373c0f88d name=/runtime.v1.ImageService/ImageStatus
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.591555302Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a837c62f-103b-493c-8c12-14f373c0f88d name=/runtime.v1.ImageService/ImageStatus
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.592442984Z" level=info msg="Creating container: default/busybox-5bc68d56bd-nkg7x/busybox" id=6e1df8c7-ed0a-48f4-8bb9-58977ae4f5f0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.592548159Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.664526306Z" level=info msg="Created container 825515226ffaf86ee8b4e4b15e16f0970daeabdb26d371c8967c44052ec697e2: default/busybox-5bc68d56bd-nkg7x/busybox" id=6e1df8c7-ed0a-48f4-8bb9-58977ae4f5f0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.665451551Z" level=info msg="Starting container: 825515226ffaf86ee8b4e4b15e16f0970daeabdb26d371c8967c44052ec697e2" id=ce2ab51d-2b4e-4c90-9823-433d1bf90dcc name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 19:20:54 multinode-867906 crio[958]: time="2024-01-03 19:20:54.673089794Z" level=info msg="Started container" PID=2522 containerID=825515226ffaf86ee8b4e4b15e16f0970daeabdb26d371c8967c44052ec697e2 description=default/busybox-5bc68d56bd-nkg7x/busybox id=ce2ab51d-2b4e-4c90-9823-433d1bf90dcc name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a9f9c69e9d9950feda5e4629edbcd13f32ab564849df042c942d45a7a7d52e9
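	The CRI-O entries above walk the standard CRI sequence for the busybox pod: ImageStatus (image not found), PullImage, then CreateContainer and StartContainer. Below is a hedged sketch of driving the first two calls against the image service on the crio socket (path taken from the node's cri-socket annotation further down), using the published k8s.io/cri-api client. Error handling is minimal and this is not CRI-O's or the kubelet's actual code.
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	
	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	        defer cancel()
	
	        // Socket from the kubeadm.alpha.kubernetes.io/cri-socket annotation.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()
	
	        img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
	        client := runtimeapi.NewImageServiceClient(conn)
	
	        // A nil Image in the response means "not found", as logged above.
	        status, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
	        if err != nil {
	            panic(err)
	        }
	        if status.Image == nil {
	            // Mirrors the "Pulling image" / "Pulled image" lines above.
	            resp, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img})
	            if err != nil {
	                panic(err)
	            }
	            fmt.Println("pulled:", resp.ImageRef)
	        }
	    }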
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	825515226ffaf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   7a9f9c69e9d99       busybox-5bc68d56bd-nkg7x
	49a50d824c0ce       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      57 seconds ago       Running             coredns                   0                   dc35bbb3a2c80       coredns-5dd5756b68-qb6ll
	835395ac2f2c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      57 seconds ago       Running             storage-provisioner       0                   d7c2d9c4f4ace       storage-provisioner
	ead1ed65b5bd8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   a5151ee308733       kube-proxy-nrm8b
	052c6496ac195       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   e880f7ee08c3f       kindnet-bzwc8
	acd2fb6f46da5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   47300db3ba570       kube-controller-manager-multinode-867906
	8f59f5d77b69e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   50305a86fa764       kube-apiserver-multinode-867906
	853897de42334       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   e0c9afaa46dee       kube-scheduler-multinode-867906
	f81fb3eb06812       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   70af77d02a92a       etcd-multinode-867906
	
	
	==> coredns [49a50d824c0ce3dd6f2f1be58cfc2c90882c84c7493cab10792a39e49872ea10] <==
	[INFO] 10.244.1.2:50369 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095198s
	[INFO] 10.244.0.3:60861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108841s
	[INFO] 10.244.0.3:38138 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001807006s
	[INFO] 10.244.0.3:54960 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083114s
	[INFO] 10.244.0.3:45491 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066555s
	[INFO] 10.244.0.3:44345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001309187s
	[INFO] 10.244.0.3:54336 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058488s
	[INFO] 10.244.0.3:56885 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084833s
	[INFO] 10.244.0.3:47004 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045819s
	[INFO] 10.244.1.2:48898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117448s
	[INFO] 10.244.1.2:59687 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103627s
	[INFO] 10.244.1.2:43048 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078087s
	[INFO] 10.244.1.2:37314 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043245s
	[INFO] 10.244.0.3:40094 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120673s
	[INFO] 10.244.0.3:47513 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085697s
	[INFO] 10.244.0.3:43614 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069197s
	[INFO] 10.244.0.3:58142 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043401s
	[INFO] 10.244.1.2:38869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107187s
	[INFO] 10.244.1.2:57087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129331s
	[INFO] 10.244.1.2:50880 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093074s
	[INFO] 10.244.1.2:45846 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078605s
	[INFO] 10.244.0.3:52180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096767s
	[INFO] 10.244.0.3:45998 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089678s
	[INFO] 10.244.0.3:33021 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000056867s
	[INFO] 10.244.0.3:39627 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065256s
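	Each coredns line is the log plugin's query record: client address and port, query ID, type and name, protocol and request size, then the response code, flags, payload length, and serve time. The kubernetes.default.svc.cluster.local lookups resolve through the cluster DNS service at 10.96.0.10 (allocated for kube-system/kube-dns in the apiserver log below). A small sketch that reproduces such a lookup from inside a pod by pointing a Go resolver at that service IP, for illustration only:
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        // Force queries through the cluster DNS service IP seen in these logs.
	        r := &net.Resolver{
	            PreferGo: true,
	            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
	                d := net.Dialer{Timeout: 2 * time.Second}
	                return d.DialContext(ctx, network, "10.96.0.10:53")
	            },
	        }
	        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(addrs) // expected: the kubernetes service clusterIP, 10.96.0.1
	    }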
	
	
	==> describe nodes <==
	Name:               multinode-867906
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-867906
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-867906
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_19_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:19:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-867906
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:20:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:20:00 +0000   Wed, 03 Jan 2024 19:19:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:20:00 +0000   Wed, 03 Jan 2024 19:19:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:20:00 +0000   Wed, 03 Jan 2024 19:19:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:20:00 +0000   Wed, 03 Jan 2024 19:20:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-867906
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 26171c9889b04e34991152c3f509e8f6
	  System UUID:                d98dba2c-8dfb-4ce8-9fe2-fbdb2ef0997b
	  Boot ID:                    b5a86fc9-be37-4e1f-bbe9-b1739322b77c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nkg7x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-qb6ll                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-multinode-867906                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         102s
	  kube-system                 kindnet-bzwc8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-multinode-867906             250m (3%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-multinode-867906    200m (2%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-nrm8b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-multinode-867906             100m (1%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 88s   kube-proxy       
	  Normal  Starting                 103s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s  kubelet          Node multinode-867906 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s  kubelet          Node multinode-867906 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s  kubelet          Node multinode-867906 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s   node-controller  Node multinode-867906 event: Registered Node multinode-867906 in Controller
	  Normal  NodeReady                58s   kubelet          Node multinode-867906 status is now: NodeReady
	
	
	Name:               multinode-867906-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-867906-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-867906
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_03T19_20_18_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:20:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-867906-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 19:20:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 19:20:49 +0000   Wed, 03 Jan 2024 19:20:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 19:20:49 +0000   Wed, 03 Jan 2024 19:20:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 19:20:49 +0000   Wed, 03 Jan 2024 19:20:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 19:20:49 +0000   Wed, 03 Jan 2024 19:20:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-867906-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f2a7cb6240f4145bb91fe0fa7d942c7
	  System UUID:                de3db998-dcc6-4bd9-9be6-8cb77d4ac9fa
	  Boot ID:                    b5a86fc9-be37-4e1f-bbe9-b1739322b77c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8j67l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-4mkm4               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-proxy-d5vmq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x5 over 42s)  kubelet          Node multinode-867906-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 42s)  kubelet          Node multinode-867906-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 42s)  kubelet          Node multinode-867906-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node multinode-867906-m02 event: Registered Node multinode-867906-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-867906-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.005068] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007977] FS-Cache: N-cookie d=000000004745ad78{9p.inode} n=0000000020f84279
	[  +0.008772] FS-Cache: N-key=[8] '8ba00f0200000000'
	[  +0.279221] FS-Cache: Duplicate cookie detected
	[  +0.004677] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006751] FS-Cache: O-cookie d=000000004745ad78{9p.inode} n=00000000084ca310
	[  +0.007401] FS-Cache: O-key=[8] '98a00f0200000000'
	[  +0.004937] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006581] FS-Cache: N-cookie d=000000004745ad78{9p.inode} n=00000000e0790b00
	[  +0.008725] FS-Cache: N-key=[8] '98a00f0200000000'
	[Jan 3 19:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 3 19:11] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +1.016130] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +2.015807] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +4.127685] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[  +8.191385] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[ +16.126814] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	[Jan 3 19:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 26 61 f8 2e aa 36 e6 16 11 d2 10 6b 08 00
	
	
	==> etcd [f81fb3eb06812a6d827480fe2cea998e06478671801944ce041f970529a9e8c9] <==
	{"level":"info","ts":"2024-01-03T19:19:10.804752Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-03T19:19:10.80484Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-03T19:19:11.291897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-03T19:19:11.291947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-03T19:19:11.291979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-03T19:19:11.291995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-03T19:19:11.292004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-03T19:19:11.292013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-03T19:19:11.29202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-03T19:19:11.293105Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:19:11.29367Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-867906 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:19:11.29368Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:19:11.293745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:19:11.294029Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:19:11.294101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T19:19:11.294249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:19:11.294341Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:19:11.294367Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:19:11.294996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:19:11.295094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-03T19:20:07.280242Z","caller":"traceutil/trace.go:171","msg":"trace[1969592926] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:432; }","duration":"113.372671ms","start":"2024-01-03T19:20:07.166855Z","end":"2024-01-03T19:20:07.280228Z","steps":["trace[1969592926] 'read index received'  (duration: 113.358329ms)","trace[1969592926] 'applied index is now lower than readState.Index'  (duration: 13.847µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:20:07.280361Z","caller":"traceutil/trace.go:171","msg":"trace[1119904181] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"223.552913ms","start":"2024-01-03T19:20:07.056781Z","end":"2024-01-03T19:20:07.280334Z","steps":["trace[1119904181] 'process raft request'  (duration: 223.341738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:20:07.280383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.484924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T19:20:07.280429Z","caller":"traceutil/trace.go:171","msg":"trace[27662612] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:414; }","duration":"113.599026ms","start":"2024-01-03T19:20:07.166823Z","end":"2024-01-03T19:20:07.280422Z","steps":["trace[27662612] 'agreement among raft nodes before linearized reading'  (duration: 113.470471ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T19:20:07.330618Z","caller":"traceutil/trace.go:171","msg":"trace[1959969929] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"162.939197ms","start":"2024-01-03T19:20:07.167655Z","end":"2024-01-03T19:20:07.330595Z","steps":["trace[1959969929] 'process raft request'  (duration: 162.70922ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:20:59 up  1:03,  0 users,  load average: 0.47, 0.72, 0.57
	Linux multinode-867906 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [052c6496ac19546f99f335a6e1392db121e4163d33941f7181ebc81d7bd9e0ba] <==
	I0103 19:19:30.181314       1 main.go:116] setting mtu 1500 for CNI 
	I0103 19:19:30.181334       1 main.go:146] kindnetd IP family: "ipv4"
	I0103 19:19:30.181449       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0103 19:20:00.401799       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0103 19:20:00.409967       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 19:20:00.409999       1 main.go:227] handling current node
	I0103 19:20:10.422196       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 19:20:10.422227       1 main.go:227] handling current node
	I0103 19:20:20.434683       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 19:20:20.434705       1 main.go:227] handling current node
	I0103 19:20:20.434723       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 19:20:20.434729       1 main.go:250] Node multinode-867906-m02 has CIDR [10.244.1.0/24] 
	I0103 19:20:20.434880       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0103 19:20:30.447400       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 19:20:30.447426       1 main.go:227] handling current node
	I0103 19:20:30.447437       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 19:20:30.447441       1 main.go:250] Node multinode-867906-m02 has CIDR [10.244.1.0/24] 
	I0103 19:20:40.460068       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 19:20:40.460094       1 main.go:227] handling current node
	I0103 19:20:40.460104       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 19:20:40.460109       1 main.go:250] Node multinode-867906-m02 has CIDR [10.244.1.0/24] 
	I0103 19:20:50.472779       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 19:20:50.472804       1 main.go:227] handling current node
	I0103 19:20:50.472816       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 19:20:50.472821       1 main.go:250] Node multinode-867906-m02 has CIDR [10.244.1.0/24] 
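	The kindnet log above shows the node-to-node dataplane at work: for each peer node it learns the pod CIDR and installs a route via that node's IP ("Adding route { ... Dst: 10.244.1.0/24 ... Gw: 192.168.58.3 ...}"). A hedged sketch of that route programming with github.com/vishvananda/netlink, using the values from the log; it requires CAP_NET_ADMIN and is not kindnet's actual source:
	
	    package main
	
	    import (
	        "net"
	
	        "github.com/vishvananda/netlink"
	    )
	
	    func main() {
	        // Peer node's pod CIDR and node IP, as reported in the kindnet log.
	        _, podCIDR, err := net.ParseCIDR("10.244.1.0/24")
	        if err != nil {
	            panic(err)
	        }
	        route := &netlink.Route{
	            Dst: podCIDR,
	            Gw:  net.ParseIP("192.168.58.3"),
	        }
	        // RouteReplace is idempotent: it adds the route or updates it in place.
	        if err := netlink.RouteReplace(route); err != nil {
	            panic(err)
	        }
	    }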
	
	
	==> kube-apiserver [8f59f5d77b69eab3e383874f7e0a29642f64d4dbfbc662d6abde2988fc95786a] <==
	I0103 19:19:13.177345       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 19:19:13.177844       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 19:19:13.177934       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0103 19:19:13.178037       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 19:19:13.180008       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 19:19:13.180162       1 controller.go:624] quota admission added evaluator for: namespaces
	E0103 19:19:13.186858       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0103 19:19:13.274279       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0103 19:19:13.274306       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0103 19:19:13.389240       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 19:19:14.045672       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0103 19:19:14.049236       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0103 19:19:14.049254       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 19:19:14.420544       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 19:19:14.451783       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0103 19:19:14.492745       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0103 19:19:14.499786       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0103 19:19:14.500639       1 controller.go:624] quota admission added evaluator for: endpoints
	I0103 19:19:14.504131       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0103 19:19:15.102450       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 19:19:15.829739       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 19:19:15.839227       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0103 19:19:15.849490       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 19:19:29.248463       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0103 19:19:29.499286       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [acd2fb6f46da553f56b05e508c563f3ba186fa2f8fbda6d4d89bece3c6ed9c60] <==
	I0103 19:20:00.578686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.04µs"
	I0103 19:20:01.083933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.394µs"
	I0103 19:20:02.096777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.268853ms"
	I0103 19:20:02.096902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.828µs"
	I0103 19:20:03.562396       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0103 19:20:17.794838       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-867906-m02\" does not exist"
	I0103 19:20:17.804449       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d5vmq"
	I0103 19:20:17.804620       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4mkm4"
	I0103 19:20:17.808004       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-867906-m02" podCIDRs=["10.244.1.0/24"]
	I0103 19:20:18.564811       1 event.go:307] "Event occurred" object="multinode-867906-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-867906-m02 event: Registered Node multinode-867906-m02 in Controller"
	I0103 19:20:18.564830       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-867906-m02"
	I0103 19:20:49.273229       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-867906-m02"
	I0103 19:20:51.709111       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0103 19:20:51.717571       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8j67l"
	I0103 19:20:51.722904       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-nkg7x"
	I0103 19:20:51.729948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.03332ms"
	I0103 19:20:51.735795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.728127ms"
	I0103 19:20:51.735879       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.592µs"
	I0103 19:20:51.737387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="66.616µs"
	I0103 19:20:51.737613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.415µs"
	I0103 19:20:53.579035       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-8j67l" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-8j67l"
	I0103 19:20:55.185658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.09262ms"
	I0103 19:20:55.185755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.207µs"
	I0103 19:20:55.376284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.097696ms"
	I0103 19:20:55.376373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.453µs"
	
	
	==> kube-proxy [ead1ed65b5bd8761c2f06236e37806a2548f6082dc9d1fe9cec9c680874b84b4] <==
	I0103 19:19:30.292613       1 server_others.go:69] "Using iptables proxy"
	I0103 19:19:30.302412       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0103 19:19:30.677595       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0103 19:19:30.679729       1 server_others.go:152] "Using iptables Proxier"
	I0103 19:19:30.679762       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0103 19:19:30.679769       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0103 19:19:30.679796       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 19:19:30.680004       1 server.go:846] "Version info" version="v1.28.4"
	I0103 19:19:30.680014       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:19:30.680512       1 config.go:315] "Starting node config controller"
	I0103 19:19:30.680570       1 config.go:188] "Starting service config controller"
	I0103 19:19:30.680650       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 19:19:30.680598       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 19:19:30.680625       1 config.go:97] "Starting endpoint slice config controller"
	I0103 19:19:30.680721       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 19:19:30.781148       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 19:19:30.781183       1 shared_informer.go:318] Caches are synced for service config
	I0103 19:19:30.781194       1 shared_informer.go:318] Caches are synced for node config
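	Among the startup lines above, kube-proxy sets the route_localnet sysctl so NodePorts also answer on loopback addresses. A trivial way to confirm the setting took effect on the node, as a plain procfs read (sketch only):
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "strings"
	    )
	
	    func main() {
	        // kube-proxy logs "Setting route_localnet=1"; verify the knob.
	        data, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("route_localnet =", strings.TrimSpace(string(data)))
	    }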
	
	
	==> kube-scheduler [853897de423341d9833d2cf2b54de7ac138ca3174cde8dbab959f1472484887c] <==
	W0103 19:19:13.201716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 19:19:13.201877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0103 19:19:13.201960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0103 19:19:13.201975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0103 19:19:13.203261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 19:19:13.203292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0103 19:19:13.203387       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0103 19:19:13.203408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0103 19:19:13.203570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0103 19:19:13.203590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0103 19:19:13.203668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 19:19:13.203684       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0103 19:19:13.203750       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0103 19:19:13.203778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0103 19:19:14.097850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0103 19:19:14.097878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0103 19:19:14.108198       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 19:19:14.108235       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 19:19:14.216099       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 19:19:14.216128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0103 19:19:14.221456       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0103 19:19:14.221489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0103 19:19:14.285718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0103 19:19:14.285769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0103 19:19:16.695721       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285441    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bae42292-7c63-45ab-963e-34f9ffe22674-cni-cfg\") pod \"kindnet-bzwc8\" (UID: \"bae42292-7c63-45ab-963e-34f9ffe22674\") " pod="kube-system/kindnet-bzwc8"
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285493    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/025f5c46-e360-423d-9c4f-eee8af0472ae-xtables-lock\") pod \"kube-proxy-nrm8b\" (UID: \"025f5c46-e360-423d-9c4f-eee8af0472ae\") " pod="kube-system/kube-proxy-nrm8b"
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285569    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bae42292-7c63-45ab-963e-34f9ffe22674-xtables-lock\") pod \"kindnet-bzwc8\" (UID: \"bae42292-7c63-45ab-963e-34f9ffe22674\") " pod="kube-system/kindnet-bzwc8"
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285597    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld7ts\" (UniqueName: \"kubernetes.io/projected/bae42292-7c63-45ab-963e-34f9ffe22674-kube-api-access-ld7ts\") pod \"kindnet-bzwc8\" (UID: \"bae42292-7c63-45ab-963e-34f9ffe22674\") " pod="kube-system/kindnet-bzwc8"
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285622    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bae42292-7c63-45ab-963e-34f9ffe22674-lib-modules\") pod \"kindnet-bzwc8\" (UID: \"bae42292-7c63-45ab-963e-34f9ffe22674\") " pod="kube-system/kindnet-bzwc8"
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285639    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/025f5c46-e360-423d-9c4f-eee8af0472ae-kube-proxy\") pod \"kube-proxy-nrm8b\" (UID: \"025f5c46-e360-423d-9c4f-eee8af0472ae\") " pod="kube-system/kube-proxy-nrm8b"
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285670    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/025f5c46-e360-423d-9c4f-eee8af0472ae-lib-modules\") pod \"kube-proxy-nrm8b\" (UID: \"025f5c46-e360-423d-9c4f-eee8af0472ae\") " pod="kube-system/kube-proxy-nrm8b"
	Jan 03 19:19:29 multinode-867906 kubelet[1585]: I0103 19:19:29.285737    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfhf\" (UniqueName: \"kubernetes.io/projected/025f5c46-e360-423d-9c4f-eee8af0472ae-kube-api-access-vgfhf\") pod \"kube-proxy-nrm8b\" (UID: \"025f5c46-e360-423d-9c4f-eee8af0472ae\") " pod="kube-system/kube-proxy-nrm8b"
	Jan 03 19:19:31 multinode-867906 kubelet[1585]: I0103 19:19:31.012000    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-bzwc8" podStartSLOduration=2.011959315 podCreationTimestamp="2024-01-03 19:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:19:31.011699943 +0000 UTC m=+15.206791541" watchObservedRunningTime="2024-01-03 19:19:31.011959315 +0000 UTC m=+15.207050917"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.539885    1585 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.560921    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nrm8b" podStartSLOduration=31.560874082 podCreationTimestamp="2024-01-03 19:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:19:31.022177895 +0000 UTC m=+15.217269499" watchObservedRunningTime="2024-01-03 19:20:00.560874082 +0000 UTC m=+44.755965691"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.561094    1585 topology_manager.go:215] "Topology Admit Handler" podUID="2e6896b5-2324-446b-b295-0d0a2b8ad24c" podNamespace="kube-system" podName="storage-provisioner"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.562097    1585 topology_manager.go:215] "Topology Admit Handler" podUID="a10d6003-2e28-4c8f-a743-87a3a9e768be" podNamespace="kube-system" podName="coredns-5dd5756b68-qb6ll"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.610740    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2e6896b5-2324-446b-b295-0d0a2b8ad24c-tmp\") pod \"storage-provisioner\" (UID: \"2e6896b5-2324-446b-b295-0d0a2b8ad24c\") " pod="kube-system/storage-provisioner"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.610818    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10d6003-2e28-4c8f-a743-87a3a9e768be-config-volume\") pod \"coredns-5dd5756b68-qb6ll\" (UID: \"a10d6003-2e28-4c8f-a743-87a3a9e768be\") " pod="kube-system/coredns-5dd5756b68-qb6ll"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.610855    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwvq5\" (UniqueName: \"kubernetes.io/projected/2e6896b5-2324-446b-b295-0d0a2b8ad24c-kube-api-access-cwvq5\") pod \"storage-provisioner\" (UID: \"2e6896b5-2324-446b-b295-0d0a2b8ad24c\") " pod="kube-system/storage-provisioner"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: I0103 19:20:00.610983    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjcm\" (UniqueName: \"kubernetes.io/projected/a10d6003-2e28-4c8f-a743-87a3a9e768be-kube-api-access-vfjcm\") pod \"coredns-5dd5756b68-qb6ll\" (UID: \"a10d6003-2e28-4c8f-a743-87a3a9e768be\") " pod="kube-system/coredns-5dd5756b68-qb6ll"
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: W0103 19:20:00.926998    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/crio-d7c2d9c4f4aceaca5712eaa499e128c51f8eadea48a3d82e886bf44658adeebb WatchSource:0}: Error finding container d7c2d9c4f4aceaca5712eaa499e128c51f8eadea48a3d82e886bf44658adeebb: Status 404 returned error can't find the container with id d7c2d9c4f4aceaca5712eaa499e128c51f8eadea48a3d82e886bf44658adeebb
	Jan 03 19:20:00 multinode-867906 kubelet[1585]: W0103 19:20:00.927302    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/crio-dc35bbb3a2c804367b3638dc959650a3595ab097333212010ef955bbf5dfd5c1 WatchSource:0}: Error finding container dc35bbb3a2c804367b3638dc959650a3595ab097333212010ef955bbf5dfd5c1: Status 404 returned error can't find the container with id dc35bbb3a2c804367b3638dc959650a3595ab097333212010ef955bbf5dfd5c1
	Jan 03 19:20:01 multinode-867906 kubelet[1585]: I0103 19:20:01.083179    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qb6ll" podStartSLOduration=32.083127633 podCreationTimestamp="2024-01-03 19:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:20:01.082929591 +0000 UTC m=+45.278021194" watchObservedRunningTime="2024-01-03 19:20:01.083127633 +0000 UTC m=+45.278219234"
	Jan 03 19:20:01 multinode-867906 kubelet[1585]: I0103 19:20:01.092037    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.091988479 podCreationTimestamp="2024-01-03 19:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 19:20:01.091937975 +0000 UTC m=+45.287029575" watchObservedRunningTime="2024-01-03 19:20:01.091988479 +0000 UTC m=+45.287080079"
	Jan 03 19:20:51 multinode-867906 kubelet[1585]: I0103 19:20:51.728267    1585 topology_manager.go:215] "Topology Admit Handler" podUID="c7aa1d5c-05fa-4114-9e2e-721cef17f4cd" podNamespace="default" podName="busybox-5bc68d56bd-nkg7x"
	Jan 03 19:20:51 multinode-867906 kubelet[1585]: I0103 19:20:51.814650    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74l5\" (UniqueName: \"kubernetes.io/projected/c7aa1d5c-05fa-4114-9e2e-721cef17f4cd-kube-api-access-t74l5\") pod \"busybox-5bc68d56bd-nkg7x\" (UID: \"c7aa1d5c-05fa-4114-9e2e-721cef17f4cd\") " pod="default/busybox-5bc68d56bd-nkg7x"
	Jan 03 19:20:52 multinode-867906 kubelet[1585]: W0103 19:20:52.079082    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/crio-7a9f9c69e9d9950feda5e4629edbcd13f32ab564849df042c942d45a7a7d52e9 WatchSource:0}: Error finding container 7a9f9c69e9d9950feda5e4629edbcd13f32ab564849df042c942d45a7a7d52e9: Status 404 returned error can't find the container with id 7a9f9c69e9d9950feda5e4629edbcd13f32ab564849df042c942d45a7a7d52e9
	Jan 03 19:20:55 multinode-867906 kubelet[1585]: I0103 19:20:55.181775    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-nkg7x" podStartSLOduration=1.674775671 podCreationTimestamp="2024-01-03 19:20:51 +0000 UTC" firstStartedPulling="2024-01-03 19:20:52.08286766 +0000 UTC m=+96.277959253" lastFinishedPulling="2024-01-03 19:20:54.589825681 +0000 UTC m=+98.784917276" observedRunningTime="2024-01-03 19:20:55.181452302 +0000 UTC m=+99.376543905" watchObservedRunningTime="2024-01-03 19:20:55.181733694 +0000 UTC m=+99.376825293"
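	
	Note: the "Failed to process watch event ... Status 404" warnings are cAdvisor handling a container-created event before the new cri-o container is visible in its runtime view, so the lookup 404s and is retried; the pods involved all report Running above, so nothing is actually broken. To double-check from inside the node (a sketch, assuming the profile is still up):
	
	minikube -p multinode-867906 ssh "sudo crictl ps -a --id d7c2d9c4f4aceaca5712eaa499e128c51f8eadea48a3d82e886bf44658adeebb"
	# if the container shows up Running now, the 404 was only a startup race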
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-867906 -n multinode-867906
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-867906 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.29s)
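The step that failed pings the host from inside the cluster's pods. A rough manual repro (a sketch, not the exact test logic, assuming the busybox pod from the log above is still running and that the node's InternalIP stands in for the host) is:

	HOST_IP=$(kubectl --context multinode-867906 get node multinode-867906 \
	  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
	kubectl --context multinode-867906 exec busybox-5bc68d56bd-nkg7x -- ping -c 1 "$HOST_IP"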

                                                
                                    
x
+
TestRunningBinaryUpgrade (74.78s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.3843743625.exe start -p running-upgrade-972574 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.3843743625.exe start -p running-upgrade-972574 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m6.93931472s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-972574 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-972574 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.518944351s)

                                                
                                                
-- stdout --
	* [running-upgrade-972574] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-972574 in cluster running-upgrade-972574
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "running-upgrade-972574" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:32:46.942465  191967 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:32:46.942751  191967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:32:46.942761  191967 out.go:309] Setting ErrFile to fd 2...
	I0103 19:32:46.942769  191967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:32:46.943050  191967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:32:46.943730  191967 out.go:303] Setting JSON to false
	I0103 19:32:46.945482  191967 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4513,"bootTime":1704305854,"procs":620,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:32:46.945568  191967 start.go:138] virtualization: kvm guest
	I0103 19:32:46.948338  191967 out.go:177] * [running-upgrade-972574] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:32:46.949947  191967 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:32:46.949992  191967 notify.go:220] Checking for updates...
	I0103 19:32:46.951514  191967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:32:46.953093  191967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:32:46.954703  191967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 19:32:46.956058  191967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:32:46.957430  191967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:32:46.959202  191967 config.go:182] Loaded profile config "running-upgrade-972574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0103 19:32:46.959230  191967 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 19:32:46.961036  191967 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 19:32:46.962224  191967 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:32:46.985990  191967 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:32:46.986159  191967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:32:47.045589  191967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:80 SystemTime:2024-01-03 19:32:47.035644572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:32:47.045718  191967 docker.go:295] overlay module found
	I0103 19:32:47.048624  191967 out.go:177] * Using the docker driver based on existing profile
	I0103 19:32:47.049993  191967 start.go:298] selected driver: docker
	I0103 19:32:47.050008  191967 start.go:902] validating driver "docker" against &{Name:running-upgrade-972574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-972574 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 19:32:47.050077  191967 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:32:47.050964  191967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:32:47.112860  191967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:80 SystemTime:2024-01-03 19:32:47.102855589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:32:47.113208  191967 cni.go:84] Creating CNI manager for ""
	I0103 19:32:47.113235  191967 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0103 19:32:47.113245  191967 start_flags.go:323] config:
	{Name:running-upgrade-972574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-972574 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 19:32:47.115890  191967 out.go:177] * Starting control plane node running-upgrade-972574 in cluster running-upgrade-972574
	I0103 19:32:47.117218  191967 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:32:47.118693  191967 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:32:47.119895  191967 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0103 19:32:47.119924  191967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:32:47.138899  191967 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 19:32:47.138920  191967 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	W0103 19:32:47.525545  191967 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
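	
	Note: this 404 only means no preloaded image tarball was ever published for Kubernetes v1.18.0; minikube falls back to the per-image cache in the lines below rather than failing. The missing object can be confirmed directly (a sketch):
	
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 | head -n 1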
	I0103 19:32:47.525770  191967 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/running-upgrade-972574/config.json ...
	I0103 19:32:47.525797  191967 cache.go:107] acquiring lock: {Name:mk32813cf004365a08f9b0a08d727ad520adffb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.525839  191967 cache.go:107] acquiring lock: {Name:mkfa83897f799bfd5c19a3e7f7fe8f2de0ba2d77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.525862  191967 cache.go:107] acquiring lock: {Name:mk960de07345c85e18d5da664117aa14bdc27181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.525908  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0103 19:32:47.525924  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0103 19:32:47.525924  191967 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 142.728µs
	I0103 19:32:47.525936  191967 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 107.366µs
	I0103 19:32:47.525953  191967 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0103 19:32:47.525941  191967 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0103 19:32:47.525801  191967 cache.go:107] acquiring lock: {Name:mk64297fb05189f285cd28934f755730eac84699 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.525957  191967 cache.go:107] acquiring lock: {Name:mk38867322d922995bdeb28cf6e00c4803d0cb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.525998  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 19:32:47.525988  191967 cache.go:107] acquiring lock: {Name:mkbefc6b6d2efb63abbd954fce9bcd53965a9fd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.526006  191967 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 212.456µs
	I0103 19:32:47.526015  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0103 19:32:47.526024  191967 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 68.659µs
	I0103 19:32:47.526038  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0103 19:32:47.526015  191967 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 19:32:47.526031  191967 cache.go:107] acquiring lock: {Name:mk8c3cb8ce52f6a42ba80ea10b799502fe274a0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.526047  191967 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 63.795µs
	I0103 19:32:47.526057  191967 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0103 19:32:47.526039  191967 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0103 19:32:47.525944  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0103 19:32:47.526073  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0103 19:32:47.526077  191967 cache.go:194] Successfully downloaded all kic artifacts
	I0103 19:32:47.526082  191967 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 53.461µs
	I0103 19:32:47.526093  191967 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0103 19:32:47.526078  191967 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 218.879µs
	I0103 19:32:47.526120  191967 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0103 19:32:47.526121  191967 start.go:365] acquiring machines lock for running-upgrade-972574: {Name:mkd7dc4ec60c77f1505247e97e4654fd91f56252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.525957  191967 cache.go:107] acquiring lock: {Name:mk0e0ab0a315accf565161a8416d169c5e875674 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:32:47.526301  191967 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0103 19:32:47.526316  191967 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 361.286µs
	I0103 19:32:47.526330  191967 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0103 19:32:47.526344  191967 cache.go:87] Successfully saved all images to host disk.
	I0103 19:32:47.526359  191967 start.go:369] acquired machines lock for "running-upgrade-972574" in 108.931µs
	I0103 19:32:47.526391  191967 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:32:47.526401  191967 fix.go:54] fixHost starting: m01
	I0103 19:32:47.526687  191967 cli_runner.go:164] Run: docker container inspect running-upgrade-972574 --format={{.State.Status}}
	I0103 19:32:47.544942  191967 fix.go:102] recreateIfNeeded on running-upgrade-972574: state=Running err=<nil>
	W0103 19:32:47.544984  191967 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:32:47.547206  191967 out.go:177] * Updating the running docker "running-upgrade-972574" container ...
	I0103 19:32:47.548639  191967 machine.go:88] provisioning docker machine ...
	I0103 19:32:47.548673  191967 ubuntu.go:169] provisioning hostname "running-upgrade-972574"
	I0103 19:32:47.548735  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:47.564462  191967 main.go:141] libmachine: Using SSH client type: native
	I0103 19:32:47.564846  191967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0103 19:32:47.564862  191967 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-972574 && echo "running-upgrade-972574" | sudo tee /etc/hostname
	I0103 19:32:47.678318  191967 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-972574
	
	I0103 19:32:47.678407  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:47.696231  191967 main.go:141] libmachine: Using SSH client type: native
	I0103 19:32:47.696551  191967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0103 19:32:47.696569  191967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-972574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-972574/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-972574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:32:47.802094  191967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:32:47.802122  191967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-8915/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-8915/.minikube}
	I0103 19:32:47.802165  191967 ubuntu.go:177] setting up certificates
	I0103 19:32:47.802180  191967 provision.go:83] configureAuth start
	I0103 19:32:47.802233  191967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-972574
	I0103 19:32:47.818777  191967 provision.go:138] copyHostCerts
	I0103 19:32:47.818853  191967 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem, removing ...
	I0103 19:32:47.818871  191967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:32:47.818961  191967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem (1123 bytes)
	I0103 19:32:47.819097  191967 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem, removing ...
	I0103 19:32:47.819111  191967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:32:47.819152  191967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem (1679 bytes)
	I0103 19:32:47.819244  191967 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem, removing ...
	I0103 19:32:47.819256  191967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:32:47.819293  191967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem (1078 bytes)
	I0103 19:32:47.819371  191967 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-972574 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-972574]
	I0103 19:32:48.016148  191967 provision.go:172] copyRemoteCerts
	I0103 19:32:48.016202  191967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:32:48.016241  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:48.033503  191967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/running-upgrade-972574/id_rsa Username:docker}
	I0103 19:32:48.113374  191967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:32:48.130961  191967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 19:32:48.148223  191967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 19:32:48.165332  191967 provision.go:86] duration metric: configureAuth took 363.140566ms
	I0103 19:32:48.165354  191967 ubuntu.go:193] setting minikube options for container-runtime
	I0103 19:32:48.165502  191967 config.go:182] Loaded profile config "running-upgrade-972574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0103 19:32:48.165585  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:48.182479  191967 main.go:141] libmachine: Using SSH client type: native
	I0103 19:32:48.182817  191967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0103 19:32:48.182836  191967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:32:48.569575  191967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:32:48.569599  191967 machine.go:91] provisioned docker machine in 1.020945308s
	I0103 19:32:48.569608  191967 start.go:300] post-start starting for "running-upgrade-972574" (driver="docker")
	I0103 19:32:48.569616  191967 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:32:48.569696  191967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:32:48.569735  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:48.587454  191967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/running-upgrade-972574/id_rsa Username:docker}
	I0103 19:32:48.665481  191967 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:32:48.668382  191967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 19:32:48.668407  191967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 19:32:48.668420  191967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 19:32:48.668429  191967 info.go:137] Remote host: Ubuntu 19.10
	I0103 19:32:48.668441  191967 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/addons for local assets ...
	I0103 19:32:48.668496  191967 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/files for local assets ...
	I0103 19:32:48.668600  191967 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> 156702.pem in /etc/ssl/certs
	I0103 19:32:48.668723  191967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:32:48.675595  191967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /etc/ssl/certs/156702.pem (1708 bytes)
	I0103 19:32:48.693478  191967 start.go:303] post-start completed in 123.854944ms
	I0103 19:32:48.693565  191967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:32:48.693610  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:48.709876  191967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/running-upgrade-972574/id_rsa Username:docker}
	I0103 19:32:48.791228  191967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 19:32:48.795148  191967 fix.go:56] fixHost completed within 1.268743361s
	I0103 19:32:48.795173  191967 start.go:83] releasing machines lock for "running-upgrade-972574", held for 1.268798807s
	I0103 19:32:48.795238  191967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-972574
	I0103 19:32:48.812719  191967 ssh_runner.go:195] Run: cat /version.json
	I0103 19:32:48.812759  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:48.812854  191967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:32:48.812927  191967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-972574
	I0103 19:32:48.830636  191967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/running-upgrade-972574/id_rsa Username:docker}
	I0103 19:32:48.831128  191967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/running-upgrade-972574/id_rsa Username:docker}
	W0103 19:32:48.935429  191967 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0103 19:32:48.935491  191967 ssh_runner.go:195] Run: systemctl --version
	I0103 19:32:48.939739  191967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:32:48.989423  191967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:32:48.993918  191967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:32:49.009148  191967 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 19:32:49.009202  191967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:32:49.030049  191967 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
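	
	Note: renaming the preinstalled loopback/bridge/podman CNI configs to *.mk_disabled keeps cri-o from picking a default network before the intended CNI is laid down. What was taken out of play can be listed (a sketch, assuming the profile container is still running):
	
	minikube -p running-upgrade-972574 ssh "ls -l /etc/cni/net.d/"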
	I0103 19:32:49.030074  191967 start.go:475] detecting cgroup driver to use...
	I0103 19:32:49.030101  191967 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 19:32:49.030165  191967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:32:49.051691  191967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:32:49.060360  191967 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:32:49.060444  191967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:32:49.069032  191967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:32:49.077286  191967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0103 19:32:49.085941  191967 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0103 19:32:49.085985  191967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:32:49.162414  191967 docker.go:219] disabling docker service ...
	I0103 19:32:49.162481  191967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:32:49.171754  191967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:32:49.181147  191967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:32:49.264562  191967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:32:49.359371  191967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:32:49.368776  191967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:32:49.382359  191967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 19:32:49.382417  191967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:32:49.392975  191967 out.go:177] 
	W0103 19:32:49.394348  191967 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0103 19:32:49.394371  191967 out.go:239] * 
	* 
	W0103 19:32:49.395362  191967 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 19:32:49.397104  191967 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-972574 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
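The root cause is in the stderr above: the new binary rewrites pause_image in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf, but the v1.9.0 kicbase it is upgrading (Ubuntu 19.10, kicbase:v0.0.8 per the inspect output below) predates that layout and only ships the monolithic /etc/crio/crio.conf, so sed exits 2 and start aborts with RUNTIME_ENABLE. A defensive variant of that step (a sketch, not minikube's actual fix) would fall back to the old path:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # older kicbase images predate the drop-in dir
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"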
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-03 19:32:49.425251828 +0000 UTC m=+2138.519388927
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-972574
helpers_test.go:235: (dbg) docker inspect running-upgrade-972574:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8241f226cd4780edcb3fb4ea1bb2caf362cafc07de3cb603745520f781793d2",
	        "Created": "2024-01-03T19:31:40.49946632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T19:31:41.63848695Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/b8241f226cd4780edcb3fb4ea1bb2caf362cafc07de3cb603745520f781793d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8241f226cd4780edcb3fb4ea1bb2caf362cafc07de3cb603745520f781793d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8241f226cd4780edcb3fb4ea1bb2caf362cafc07de3cb603745520f781793d2/hosts",
	        "LogPath": "/var/lib/docker/containers/b8241f226cd4780edcb3fb4ea1bb2caf362cafc07de3cb603745520f781793d2/b8241f226cd4780edcb3fb4ea1bb2caf362cafc07de3cb603745520f781793d2-json.log",
	        "Name": "/running-upgrade-972574",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-972574:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ef5cccda2292889a4944c710cc639a62d2fa241724317ce873bc3ac77e69105-init/diff:/var/lib/docker/overlay2/31d33b19dcbfc2bafb062b13c27fb2e2b6dbb939d9935065b28f1d6a86187905/diff:/var/lib/docker/overlay2/64993b9a806f86bbd9d6f2b17a4ace3627f89fbe413898d80eb0f9ebac5dcc46/diff:/var/lib/docker/overlay2/09131028490f1407995389d591c2e37b8bb3f8b21b50c05e7bc45e4e71a1ee35/diff:/var/lib/docker/overlay2/2d0a1d6221677dedecbe6295467ef574a5e7209b9f057832602f61784dc3b21a/diff:/var/lib/docker/overlay2/ea48a586e53d61618b17fd5b2e00e94eff5d5ab068cbc0c87b6d51e6e5149150/diff:/var/lib/docker/overlay2/87234a72e756f6ccb04a8f4f34d33deee9541e814e92e251f3a83848849d8d46/diff:/var/lib/docker/overlay2/77e1a95f02b019d6c921f3d966559c183925ef0cfa4763f395889b4f97d93693/diff:/var/lib/docker/overlay2/86d1b1539c9d2b1f82633b8033823f4da87f92ad5e5b0c41f18620105b853ead/diff:/var/lib/docker/overlay2/e40c08956e65eb6e8f7a3bc1478c6dd9e3e33b0f85d87b2a03d039515614fa8c/diff:/var/lib/docker/overlay2/9825f7
7319e8726f9f572a2015981f52a9fc02d176c6708ea37341b75fd1da17/diff:/var/lib/docker/overlay2/6b18e6a40f03b20c5af9a66e5157dff8ab15b36076518b2bd910acd34e33a586/diff:/var/lib/docker/overlay2/8db1545e52dea817bad94c18ef19d15194452ee036b95b68c70230e65fbef184/diff:/var/lib/docker/overlay2/d8f8069c4a4e2dc6f7b11c1977f4cc2ea4d615a7c68dc7f7fb54a95bfa861e3b/diff:/var/lib/docker/overlay2/a8b95fab57caab56ad0f2ba84f189d7c3ddc25dcc322fb951b2a3a79fd6a3401/diff:/var/lib/docker/overlay2/041ec458a82d7105b41af132252197c04067302150579caaeb3836f6f5d4e5c6/diff:/var/lib/docker/overlay2/46da2c41b05c4b03c14744d5506f1ffaa6c09b46d70414b33cf792e5856129d0/diff:/var/lib/docker/overlay2/35b6cfac166a46bbb4b7cb73327fa1aba7b4fcea0f3798f507bb78b8f6fd15bc/diff:/var/lib/docker/overlay2/0849bd59dee54bf98969a8d1fffc6ff919cd2980c02e55b5dfa161e70f92e7bd/diff:/var/lib/docker/overlay2/14a917fb3381ca94c97a0e0b8826baef2ab26ad645349111a3c73a9034922110/diff:/var/lib/docker/overlay2/b066933e8721a4f3c801f228864233eb8fe96ade063eec27a301352248fc3166/diff:/var/lib/d
ocker/overlay2/ed24f01dd0cf58af02c9ca8a292c8f3f31e192d0307315f5b1e973918493353c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ef5cccda2292889a4944c710cc639a62d2fa241724317ce873bc3ac77e69105/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ef5cccda2292889a4944c710cc639a62d2fa241724317ce873bc3ac77e69105/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ef5cccda2292889a4944c710cc639a62d2fa241724317ce873bc3ac77e69105/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-972574",
	                "Source": "/var/lib/docker/volumes/running-upgrade-972574/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-972574",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-972574",
	                "name.minikube.sigs.k8s.io": "running-upgrade-972574",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7bc55f6873be7cb2685589bf81c2caac34d02000249d436f096aa477fff0112d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32950"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32949"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7bc55f6873be",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "49228656687f2134ad59f386d4e2600e760e1ac04cb23727b1d9815a9f371cd4",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "80c13b79f14c076dedcf386b2a637955d2a7bacbc883ee99dd694ac064514172",
	                    "EndpointID": "49228656687f2134ad59f386d4e2600e760e1ac04cb23727b1d9815a9f371cd4",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
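The inspect dump above is the container's complete state document; individual fields can be pulled with docker's Go-template formatter instead of scanning the whole output. A usage sketch against this container (illustrative only, not part of the test run):

	# Bridge IP of the node container, from the NetworkSettings block above.
	docker inspect -f '{{.NetworkSettings.IPAddress}}' running-upgrade-972574
	# Host port that Docker mapped to the API server's 8443/tcp.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' running-upgrade-972574
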
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-972574 -n running-upgrade-972574
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-972574 -n running-upgrade-972574: exit status 4 (325.820433ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 19:32:49.737146  192683 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-972574" does not appear in /home/jenkins/minikube-integration/17885-8915/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-972574" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-972574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-972574
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-972574: (2.635790787s)
--- FAIL: TestRunningBinaryUpgrade (74.78s)
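The root cause is visible in the status output above: the profile's endpoint is missing from the kubeconfig ("running-upgrade-972574" does not appear in the integration kubeconfig), so the new binary reports a stale kubectl context rather than a dead host. A minimal sketch of repairing such a state by hand, assuming the profile still exists (illustrative commands, not part of the test):

	# Rewrite this profile's kubeconfig entry to match the live cluster endpoint,
	# as the warning in the status output suggests.
	out/minikube-linux-amd64 update-context -p running-upgrade-972574
	# Confirm kubectl now targets the refreshed context.
	kubectl config current-context
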

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (107.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.3788688976.exe start -p stopped-upgrade-279760 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0103 19:30:41.654401   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.3788688976.exe start -p stopped-upgrade-279760 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m38.359274086s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.3788688976.exe -p stopped-upgrade-279760 stop
E0103 19:31:50.908685   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.3788688976.exe -p stopped-upgrade-279760 stop: (2.131463003s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-279760 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-279760 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.823463227s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-279760] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-279760 in cluster stopped-upgrade-279760
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Restarting existing docker container for "stopped-upgrade-279760" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:31:51.692385  177187 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:31:51.692488  177187 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:31:51.692492  177187 out.go:309] Setting ErrFile to fd 2...
	I0103 19:31:51.692497  177187 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:31:51.692717  177187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:31:51.693266  177187 out.go:303] Setting JSON to false
	I0103 19:31:51.694896  177187 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4458,"bootTime":1704305854,"procs":698,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:31:51.694963  177187 start.go:138] virtualization: kvm guest
	I0103 19:31:51.697447  177187 out.go:177] * [stopped-upgrade-279760] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:31:51.699064  177187 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:31:51.699035  177187 notify.go:220] Checking for updates...
	I0103 19:31:51.700637  177187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:31:51.702161  177187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:31:51.703928  177187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 19:31:51.705400  177187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:31:51.710321  177187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:31:51.712362  177187 config.go:182] Loaded profile config "stopped-upgrade-279760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0103 19:31:51.712395  177187 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 19:31:51.714531  177187 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 19:31:51.715984  177187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:31:51.746825  177187 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:31:51.746948  177187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:31:51.816341  177187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:106 SystemTime:2024-01-03 19:31:51.805388364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:31:51.816483  177187 docker.go:295] overlay module found
	I0103 19:31:51.819694  177187 out.go:177] * Using the docker driver based on existing profile
	I0103 19:31:51.821104  177187 start.go:298] selected driver: docker
	I0103 19:31:51.821125  177187 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-279760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-279760 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 19:31:51.821228  177187 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:31:51.822471  177187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:31:51.930894  177187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:106 SystemTime:2024-01-03 19:31:51.921442082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:31:51.931174  177187 cni.go:84] Creating CNI manager for ""
	I0103 19:31:51.931196  177187 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0103 19:31:51.931204  177187 start_flags.go:323] config:
	{Name:stopped-upgrade-279760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-279760 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 19:31:51.933216  177187 out.go:177] * Starting control plane node stopped-upgrade-279760 in cluster stopped-upgrade-279760
	I0103 19:31:51.934788  177187 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:31:51.936374  177187 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:31:51.937729  177187 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0103 19:31:51.937823  177187 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:31:51.954890  177187 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 19:31:51.954915  177187 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	W0103 19:31:52.274415  177187 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0103 19:31:52.274554  177187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/stopped-upgrade-279760/config.json ...
	I0103 19:31:52.274685  177187 cache.go:107] acquiring lock: {Name:mkbefc6b6d2efb63abbd954fce9bcd53965a9fd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.274735  177187 cache.go:107] acquiring lock: {Name:mk32813cf004365a08f9b0a08d727ad520adffb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.274770  177187 cache.go:107] acquiring lock: {Name:mk38867322d922995bdeb28cf6e00c4803d0cb10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.274777  177187 cache.go:107] acquiring lock: {Name:mk960de07345c85e18d5da664117aa14bdc27181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.274784  177187 cache.go:107] acquiring lock: {Name:mkfa83897f799bfd5c19a3e7f7fe8f2de0ba2d77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.274770  177187 cache.go:107] acquiring lock: {Name:mk0e0ab0a315accf565161a8416d169c5e875674 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.274842  177187 cache.go:194] Successfully downloaded all kic artifacts
	I0103 19:31:52.274871  177187 start.go:365] acquiring machines lock for stopped-upgrade-279760: {Name:mk896bb5c3483339c74e4da68b0053e7fa629483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.274877  177187 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0103 19:31:52.274927  177187 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:31:52.274944  177187 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0103 19:31:52.274948  177187 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0103 19:31:52.274989  177187 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0103 19:31:52.274994  177187 start.go:369] acquired machines lock for "stopped-upgrade-279760" in 113.183µs
	I0103 19:31:52.274998  177187 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0103 19:31:52.275010  177187 start.go:96] Skipping create...Using existing machine configuration
	I0103 19:31:52.275017  177187 fix.go:54] fixHost starting: m01
	I0103 19:31:52.274683  177187 cache.go:107] acquiring lock: {Name:mk64297fb05189f285cd28934f755730eac84699 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.275079  177187 cache.go:107] acquiring lock: {Name:mk8c3cb8ce52f6a42ba80ea10b799502fe274a0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:31:52.275088  177187 cache.go:115] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 19:31:52.275131  177187 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 460.268µs
	I0103 19:31:52.275151  177187 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 19:31:52.275157  177187 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0103 19:31:52.275301  177187 cli_runner.go:164] Run: docker container inspect stopped-upgrade-279760 --format={{.State.Status}}
	I0103 19:31:52.276110  177187 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0103 19:31:52.276124  177187 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0103 19:31:52.276113  177187 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0103 19:31:52.276143  177187 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0103 19:31:52.276164  177187 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0103 19:31:52.276165  177187 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0103 19:31:52.276146  177187 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0103 19:31:52.298772  177187 fix.go:102] recreateIfNeeded on stopped-upgrade-279760: state=Stopped err=<nil>
	W0103 19:31:52.298801  177187 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 19:31:52.301225  177187 out.go:177] * Restarting existing docker container for "stopped-upgrade-279760" ...
	I0103 19:31:52.302658  177187 cli_runner.go:164] Run: docker start stopped-upgrade-279760
	I0103 19:31:52.414178  177187 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0103 19:31:52.418568  177187 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0103 19:31:52.450292  177187 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0103 19:31:52.453431  177187 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0103 19:31:52.454441  177187 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0103 19:31:52.455366  177187 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0103 19:31:52.486303  177187 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0103 19:31:52.525814  177187 cache.go:157] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0103 19:31:52.525838  177187 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 251.088278ms
	I0103 19:31:52.525849  177187 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0103 19:31:52.580039  177187 cli_runner.go:164] Run: docker container inspect stopped-upgrade-279760 --format={{.State.Status}}
	I0103 19:31:52.600011  177187 kic.go:430] container "stopped-upgrade-279760" state is running.
	I0103 19:31:52.614799  177187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-279760
	I0103 19:31:52.637611  177187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/stopped-upgrade-279760/config.json ...
	I0103 19:31:52.682819  177187 machine.go:88] provisioning docker machine ...
	I0103 19:31:52.682915  177187 ubuntu.go:169] provisioning hostname "stopped-upgrade-279760"
	I0103 19:31:52.682976  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:52.703642  177187 main.go:141] libmachine: Using SSH client type: native
	I0103 19:31:52.704005  177187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0103 19:31:52.704025  177187 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-279760 && echo "stopped-upgrade-279760" | sudo tee /etc/hostname
	I0103 19:31:52.704752  177187 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55160->127.0.0.1:32954: read: connection reset by peer
	I0103 19:31:52.984296  177187 cache.go:157] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0103 19:31:52.984336  177187 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 709.259263ms
	I0103 19:31:52.984365  177187 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0103 19:31:53.581828  177187 cache.go:157] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0103 19:31:53.581857  177187 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.307182666s
	I0103 19:31:53.581873  177187 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0103 19:31:53.716277  177187 cache.go:157] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0103 19:31:53.716303  177187 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.441576822s
	I0103 19:31:53.716317  177187 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0103 19:31:53.919142  177187 cache.go:157] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0103 19:31:53.919173  177187 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.644465679s
	I0103 19:31:53.919187  177187 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0103 19:31:54.412115  177187 cache.go:157] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0103 19:31:54.412144  177187 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.137373281s
	I0103 19:31:54.412155  177187 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0103 19:31:54.556586  177187 cache.go:157] /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0103 19:31:54.556618  177187 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.281849614s
	I0103 19:31:54.556633  177187 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0103 19:31:54.556652  177187 cache.go:87] Successfully saved all images to host disk.
	I0103 19:31:55.838877  177187 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-279760
	
	I0103 19:31:55.838940  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:55.859522  177187 main.go:141] libmachine: Using SSH client type: native
	I0103 19:31:55.859894  177187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0103 19:31:55.859919  177187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-279760' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-279760/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-279760' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:31:55.994356  177187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:31:55.994395  177187 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-8915/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-8915/.minikube}
	I0103 19:31:55.994439  177187 ubuntu.go:177] setting up certificates
	I0103 19:31:55.994457  177187 provision.go:83] configureAuth start
	I0103 19:31:55.994894  177187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-279760
	I0103 19:31:56.019663  177187 provision.go:138] copyHostCerts
	I0103 19:31:56.019752  177187 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem, removing ...
	I0103 19:31:56.019774  177187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem
	I0103 19:31:56.019855  177187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/key.pem (1679 bytes)
	I0103 19:31:56.020058  177187 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem, removing ...
	I0103 19:31:56.020075  177187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem
	I0103 19:31:56.020119  177187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/ca.pem (1078 bytes)
	I0103 19:31:56.020221  177187 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem, removing ...
	I0103 19:31:56.020236  177187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem
	I0103 19:31:56.020274  177187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-8915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-8915/.minikube/cert.pem (1123 bytes)
	I0103 19:31:56.020365  177187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-279760 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-279760]
	I0103 19:31:56.311584  177187 provision.go:172] copyRemoteCerts
	I0103 19:31:56.311649  177187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:31:56.311691  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:56.330858  177187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/stopped-upgrade-279760/id_rsa Username:docker}
	I0103 19:31:56.424416  177187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:31:56.445844  177187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:31:56.465749  177187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 19:31:56.503966  177187 provision.go:86] duration metric: configureAuth took 509.491059ms
	I0103 19:31:56.503997  177187 ubuntu.go:193] setting minikube options for container-runtime
	I0103 19:31:56.504210  177187 config.go:182] Loaded profile config "stopped-upgrade-279760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0103 19:31:56.504331  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:56.527427  177187 main.go:141] libmachine: Using SSH client type: native
	I0103 19:31:56.527934  177187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809a80] 0x80c760 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0103 19:31:56.527965  177187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:31:57.513184  177187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:31:57.513218  177187 machine.go:91] provisioned docker machine in 4.83037751s
	I0103 19:31:57.513230  177187 start.go:300] post-start starting for "stopped-upgrade-279760" (driver="docker")
	I0103 19:31:57.513242  177187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:31:57.513307  177187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:31:57.513344  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:57.531513  177187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/stopped-upgrade-279760/id_rsa Username:docker}
	I0103 19:31:57.624808  177187 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:31:57.627918  177187 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 19:31:57.627948  177187 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 19:31:57.627962  177187 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 19:31:57.627971  177187 info.go:137] Remote host: Ubuntu 19.10
	I0103 19:31:57.627981  177187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/addons for local assets ...
	I0103 19:31:57.628049  177187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-8915/.minikube/files for local assets ...
	I0103 19:31:57.628139  177187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem -> 156702.pem in /etc/ssl/certs
	I0103 19:31:57.628234  177187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 19:31:57.635608  177187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/ssl/certs/156702.pem --> /etc/ssl/certs/156702.pem (1708 bytes)
	I0103 19:31:57.652938  177187 start.go:303] post-start completed in 139.692004ms
	I0103 19:31:57.653022  177187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:31:57.653066  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:57.671643  177187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/stopped-upgrade-279760/id_rsa Username:docker}
	I0103 19:31:57.751281  177187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 19:31:57.755929  177187 fix.go:56] fixHost completed within 5.480901201s
	I0103 19:31:57.755957  177187 start.go:83] releasing machines lock for "stopped-upgrade-279760", held for 5.480952065s
	I0103 19:31:57.756023  177187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-279760
	I0103 19:31:57.775295  177187 ssh_runner.go:195] Run: cat /version.json
	I0103 19:31:57.775341  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:57.775376  177187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:31:57.775455  177187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-279760
	I0103 19:31:57.794783  177187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/stopped-upgrade-279760/id_rsa Username:docker}
	I0103 19:31:57.798990  177187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/stopped-upgrade-279760/id_rsa Username:docker}
	W0103 19:31:57.881852  177187 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0103 19:31:57.881931  177187 ssh_runner.go:195] Run: systemctl --version
	I0103 19:31:57.914626  177187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:31:57.971726  177187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:31:57.976351  177187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:31:58.044945  177187 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 19:31:58.045030  177187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:31:58.074332  177187 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 19:31:58.074356  177187 start.go:475] detecting cgroup driver to use...
	I0103 19:31:58.074388  177187 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 19:31:58.074437  177187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:31:58.099781  177187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:31:58.110257  177187 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:31:58.110313  177187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:31:58.124067  177187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:31:58.136060  177187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0103 19:31:58.145652  177187 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0103 19:31:58.145704  177187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:31:58.226579  177187 docker.go:219] disabling docker service ...
	I0103 19:31:58.226663  177187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:31:58.237467  177187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:31:58.249023  177187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:31:58.343641  177187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:31:58.418443  177187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:31:58.427840  177187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:31:58.440269  177187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 19:31:58.440318  177187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:31:58.450091  177187 out.go:177] 
	W0103 19:31:58.451558  177187 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0103 19:31:58.451581  177187 out.go:239] * 
	* 
	W0103 19:31:58.452433  177187 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 19:31:58.454302  177187 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-279760 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (107.32s)
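The exit status 90 above traces to one missing file: the node provisioned by minikube v1.9.0 (Ubuntu 19.10 per the os-release probe at 19:31:57) likely carries its cri-o configuration only at /etc/crio/crio.conf, while the new binary edits the drop-in /etc/crio/crio.conf.d/02-crio.conf, so the pause_image sed at 19:31:58 has nothing to read. A defensive sketch of the same update that tolerates either layout (an illustration of the failure mode, not the command minikube actually runs):

	# Prefer the modern drop-in; fall back to the legacy single-file config.
	cfg=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$cfg" ] || cfg=/etc/crio/crio.conf
	# Apply the same pause_image rewrite the failing step attempted.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$cfg"
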

                                                
                                    

Test pass (284/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 39.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 43.59
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 42.56
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.2
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
25 TestDownloadOnlyKic 1.29
26 TestBinaryMirror 0.73
27 TestOffline 88.53
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 151.18
34 TestAddons/parallel/Registry 16.51
36 TestAddons/parallel/InspektorGadget 10.66
37 TestAddons/parallel/MetricsServer 5.6
38 TestAddons/parallel/HelmTiller 11.23
40 TestAddons/parallel/CSI 65.55
41 TestAddons/parallel/Headlamp 18.74
42 TestAddons/parallel/CloudSpanner 5.65
43 TestAddons/parallel/LocalPath 58.03
44 TestAddons/parallel/NvidiaDevicePlugin 6.62
45 TestAddons/parallel/Yakd 6
48 TestAddons/serial/GCPAuth/Namespaces 0.13
49 TestAddons/StoppedEnableDisable 12.16
50 TestCertOptions 29.61
51 TestCertExpiration 227.36
53 TestForceSystemdFlag 26.91
54 TestForceSystemdEnv 40.62
56 TestKVMDriverInstallOrUpdate 4.84
60 TestErrorSpam/setup 20.7
61 TestErrorSpam/start 0.61
62 TestErrorSpam/status 0.85
63 TestErrorSpam/pause 1.47
64 TestErrorSpam/unpause 1.47
65 TestErrorSpam/stop 1.4
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 65.21
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 29.28
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.08
76 TestFunctional/serial/CacheCmd/cache/add_remote 2.63
77 TestFunctional/serial/CacheCmd/cache/add_local 1.93
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.12
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
85 TestFunctional/serial/ExtraConfig 33.49
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.33
88 TestFunctional/serial/LogsFileCmd 1.34
89 TestFunctional/serial/InvalidService 4.26
91 TestFunctional/parallel/ConfigCmd 0.47
92 TestFunctional/parallel/DashboardCmd 22.21
93 TestFunctional/parallel/DryRun 0.4
94 TestFunctional/parallel/InternationalLanguage 0.19
95 TestFunctional/parallel/StatusCmd 1.07
99 TestFunctional/parallel/ServiceCmdConnect 18.68
100 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/PersistentVolumeClaim 44.46
103 TestFunctional/parallel/SSHCmd 0.61
104 TestFunctional/parallel/CpCmd 1.62
105 TestFunctional/parallel/MySQL 22.13
106 TestFunctional/parallel/FileSync 0.29
107 TestFunctional/parallel/CertSync 1.72
111 TestFunctional/parallel/NodeLabels 0.08
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
115 TestFunctional/parallel/License 0.64
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.47
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.11
123 TestFunctional/parallel/ImageCommands/Setup 2.07
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.21
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 8.42
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.31
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.47
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/MountCmd/any-port 9.19
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.08
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.08
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.94
146 TestFunctional/parallel/MountCmd/specific-port 1.96
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
149 TestFunctional/parallel/ProfileCmd/profile_list 0.36
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
151 TestFunctional/parallel/ServiceCmd/DeployApp 12.23
152 TestFunctional/parallel/ServiceCmd/List 1.69
153 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
155 TestFunctional/parallel/ServiceCmd/Format 0.51
156 TestFunctional/parallel/ServiceCmd/URL 0.56
157 TestFunctional/delete_addon-resizer_images 0.13
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 76.6
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.39
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.55
170 TestJSONOutput/start/Command 68.86
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.64
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.59
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.74
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.23
195 TestKicCustomNetwork/create_custom_network 41.54
196 TestKicCustomNetwork/use_default_bridge_network 26.53
197 TestKicExistingNetwork 25.01
198 TestKicCustomSubnet 23.86
199 TestKicStaticIP 27.31
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 50.38
204 TestMountStart/serial/StartWithMountFirst 8.49
205 TestMountStart/serial/VerifyMountFirst 0.25
206 TestMountStart/serial/StartWithMountSecond 5.83
207 TestMountStart/serial/VerifyMountSecond 0.25
208 TestMountStart/serial/DeleteFirst 1.62
209 TestMountStart/serial/VerifyMountPostDelete 0.25
210 TestMountStart/serial/Stop 1.22
211 TestMountStart/serial/RestartStopped 7.72
212 TestMountStart/serial/VerifyMountPostStop 0.25
215 TestMultiNode/serial/FreshStart2Nodes 117.49
216 TestMultiNode/serial/DeployApp2Nodes 5.34
218 TestMultiNode/serial/AddNode 18.18
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.28
221 TestMultiNode/serial/CopyFile 8.92
222 TestMultiNode/serial/StopNode 2.1
223 TestMultiNode/serial/StartAfterStop 10.5
224 TestMultiNode/serial/RestartKeepsNodes 112.74
225 TestMultiNode/serial/DeleteNode 4.66
226 TestMultiNode/serial/StopMultiNode 23.84
227 TestMultiNode/serial/RestartMultiNode 80.45
228 TestMultiNode/serial/ValidateNameConflict 25.86
233 TestPreload 141.67
235 TestScheduledStopUnix 100.81
238 TestInsufficientStorage 13.29
241 TestKubernetesUpgrade 358.82
242 TestMissingContainerUpgrade 134.43
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestStoppedBinaryUpgrade/Setup 2.05
246 TestNoKubernetes/serial/StartWithK8s 33.35
248 TestNoKubernetes/serial/StartWithStopK8s 7.35
249 TestNoKubernetes/serial/Start 5.54
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
251 TestNoKubernetes/serial/ProfileList 1.47
252 TestNoKubernetes/serial/Stop 1.24
253 TestNoKubernetes/serial/StartNoArgs 8.39
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.53
270 TestNetworkPlugins/group/false 4.82
275 TestPause/serial/Start 44.91
276 TestPause/serial/SecondStartNoReconfiguration 41.65
277 TestPause/serial/Pause 0.71
278 TestPause/serial/VerifyStatus 0.32
279 TestPause/serial/Unpause 0.67
280 TestPause/serial/PauseAgain 0.76
281 TestPause/serial/DeletePaused 2.71
282 TestPause/serial/VerifyDeletedResources 3.2
284 TestStartStop/group/old-k8s-version/serial/FirstStart 120.98
286 TestStartStop/group/embed-certs/serial/FirstStart 40.89
287 TestStartStop/group/embed-certs/serial/DeployApp 10.29
288 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
289 TestStartStop/group/embed-certs/serial/Stop 12.02
290 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
291 TestStartStop/group/embed-certs/serial/SecondStart 334.68
292 TestStartStop/group/old-k8s-version/serial/DeployApp 10.41
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
294 TestStartStop/group/old-k8s-version/serial/Stop 12
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
296 TestStartStop/group/old-k8s-version/serial/SecondStart 428.27
298 TestStartStop/group/no-preload/serial/FirstStart 54.87
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.97
301 TestStartStop/group/no-preload/serial/DeployApp 9.31
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
303 TestStartStop/group/no-preload/serial/Stop 11.97
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
306 TestStartStop/group/no-preload/serial/SecondStart 343.14
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.99
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 335.17
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
314 TestStartStop/group/embed-certs/serial/Pause 2.62
316 TestStartStop/group/newest-cni/serial/FirstStart 35.19
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
319 TestStartStop/group/newest-cni/serial/Stop 1.22
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
321 TestStartStop/group/newest-cni/serial/SecondStart 26.16
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
325 TestStartStop/group/newest-cni/serial/Pause 2.53
326 TestNetworkPlugins/group/auto/Start 41.31
327 TestNetworkPlugins/group/auto/KubeletFlags 0.26
328 TestNetworkPlugins/group/auto/NetCatPod 9.17
329 TestNetworkPlugins/group/auto/DNS 0.17
330 TestNetworkPlugins/group/auto/Localhost 0.13
331 TestNetworkPlugins/group/auto/HairPin 0.13
332 TestNetworkPlugins/group/kindnet/Start 72.86
333 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
335 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
336 TestStartStop/group/old-k8s-version/serial/Pause 3.29
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 17.01
338 TestNetworkPlugins/group/calico/Start 70.1
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
342 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
343 TestStartStop/group/no-preload/serial/Pause 2.93
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
346 TestNetworkPlugins/group/custom-flannel/Start 64.07
347 TestNetworkPlugins/group/enable-default-cni/Start 78.97
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
350 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
351 TestNetworkPlugins/group/kindnet/DNS 0.21
352 TestNetworkPlugins/group/kindnet/Localhost 0.15
353 TestNetworkPlugins/group/kindnet/HairPin 0.18
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.3
356 TestNetworkPlugins/group/calico/NetCatPod 11.2
357 TestNetworkPlugins/group/calico/DNS 0.15
358 TestNetworkPlugins/group/calico/Localhost 0.12
359 TestNetworkPlugins/group/calico/HairPin 0.13
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
361 TestNetworkPlugins/group/flannel/Start 59.75
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
363 TestNetworkPlugins/group/custom-flannel/DNS 0.15
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
366 TestNetworkPlugins/group/bridge/Start 77.81
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.33
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
374 TestNetworkPlugins/group/flannel/NetCatPod 9.17
375 TestNetworkPlugins/group/flannel/DNS 0.16
376 TestNetworkPlugins/group/flannel/Localhost 0.13
377 TestNetworkPlugins/group/flannel/HairPin 0.12
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
379 TestNetworkPlugins/group/bridge/NetCatPod 10.18
380 TestNetworkPlugins/group/bridge/DNS 0.14
381 TestNetworkPlugins/group/bridge/Localhost 0.12
382 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.16.0/json-events (39.85s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-365804 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-365804 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (39.850706483s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (39.85s)
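
The download-only start above is what populates the local cache: the LogsDuration output further down shows the preload tarball being fetched from storage.googleapis.com with an md5 digest embedded in the URL query (checksum=md5:432b600409d778ea7a21214e83948570) and then verified on disk. As a rough sketch of that verification step, not minikube's actual implementation, the following Go snippet hashes a cached tarball and compares it to the expected digest; the path and digest are the ones from this report and will differ on other hosts.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// verifyMD5 hashes the file at path and compares the hex digest against
	// want, the value after "md5:" in the preload download URL.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Path and digest as they appear in this report; adjust for your host.
		tarball := "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"
		if err := verifyMD5(tarball, "432b600409d778ea7a21214e83948570"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("preload checksum OK")
	}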

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
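
preload-exists then only has to confirm that the tarball from the previous step actually landed in the cache. A minimal sketch of such a check, assuming the cache layout printed in the logs (illustrative, not the test's real code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache location as printed in the LogsDuration stdout; adjust per host.
		p := "/home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"
		info, err := os.Stat(p)
		if err != nil {
			fmt.Println("preload missing:", err)
			os.Exit(1)
		}
		fmt.Printf("preload present (%d bytes)\n", info.Size())
	}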

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-365804
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-365804: exit status 85 (71.397399ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-365804 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-365804        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:57:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:57:11.008752   15681 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:57:11.009039   15681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:11.009050   15681 out.go:309] Setting ErrFile to fd 2...
	I0103 18:57:11.009054   15681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:11.009292   15681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	W0103 18:57:11.009469   15681 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-8915/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-8915/.minikube/config/config.json: no such file or directory
	I0103 18:57:11.010105   15681 out.go:303] Setting JSON to true
	I0103 18:57:11.010996   15681 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2377,"bootTime":1704305854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:57:11.011063   15681 start.go:138] virtualization: kvm guest
	I0103 18:57:11.013769   15681 out.go:97] [download-only-365804] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:57:11.015437   15681 out.go:169] MINIKUBE_LOCATION=17885
	W0103 18:57:11.013879   15681 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball: no such file or directory
	I0103 18:57:11.013920   15681 notify.go:220] Checking for updates...
	I0103 18:57:11.018456   15681 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:57:11.019996   15681 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 18:57:11.021435   15681 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 18:57:11.022897   15681 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0103 18:57:11.025433   15681 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 18:57:11.025710   15681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 18:57:11.046426   15681 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 18:57:11.046523   15681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:57:11.399857   15681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 18:57:11.391716088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:57:11.399983   15681 docker.go:295] overlay module found
	I0103 18:57:11.402468   15681 out.go:97] Using the docker driver based on user configuration
	I0103 18:57:11.402500   15681 start.go:298] selected driver: docker
	I0103 18:57:11.402508   15681 start.go:902] validating driver "docker" against <nil>
	I0103 18:57:11.402584   15681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:57:11.457942   15681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 18:57:11.450058269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:57:11.458089   15681 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 18:57:11.458613   15681 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0103 18:57:11.458763   15681 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0103 18:57:11.460818   15681 out.go:169] Using Docker driver with root privileges
	I0103 18:57:11.462324   15681 cni.go:84] Creating CNI manager for ""
	I0103 18:57:11.462350   15681 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 18:57:11.462360   15681 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 18:57:11.462379   15681 start_flags.go:323] config:
	{Name:download-only-365804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-365804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:57:11.463858   15681 out.go:97] Starting control plane node download-only-365804 in cluster download-only-365804
	I0103 18:57:11.463883   15681 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 18:57:11.465141   15681 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 18:57:11.465167   15681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 18:57:11.465216   15681 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 18:57:11.480672   15681 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 18:57:11.480862   15681 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 18:57:11.480974   15681 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 18:57:11.583562   15681 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0103 18:57:11.583595   15681 cache.go:56] Caching tarball of preloaded images
	I0103 18:57:11.583738   15681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 18:57:11.585811   15681 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0103 18:57:11.585835   15681 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:57:11.695009   15681 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0103 18:57:24.715290   15681 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 18:57:29.730460   15681 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:57:29.730550   15681 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:57:30.628237   15681 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0103 18:57:30.628547   15681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/download-only-365804/config.json ...
	I0103 18:57:30.628573   15681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/download-only-365804/config.json: {Name:mk7a50522312505062eb6446ba4ca15f68ff2a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 18:57:30.629247   15681 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 18:57:30.629435   15681 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-365804"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
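
Note that this sub-test passes even though minikube logs exits non-zero: in download-only mode no control plane node was ever created (see the trailing message in the stdout above), so exit status 85 is the expected outcome and the test merely records it. Asserting a specific exit code from a subprocess in Go looks roughly like this; the binary path, profile name, and expected code are taken from this report:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-365804")
		err := cmd.Run()

		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Println("got expected exit status 85: no control plane node to collect logs from")
			return
		}
		fmt.Println("unexpected result:", err)
	}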

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (43.59s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-365804 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-365804 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (43.586235408s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (43.59s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-365804
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-365804: exit status 85 (72.826029ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-365804 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-365804        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-365804 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-365804        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:57:50
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:57:50.933865   15908 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:57:50.933977   15908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:50.933987   15908 out.go:309] Setting ErrFile to fd 2...
	I0103 18:57:50.933992   15908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:57:50.934223   15908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	W0103 18:57:50.934363   15908 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-8915/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-8915/.minikube/config/config.json: no such file or directory
	I0103 18:57:50.934827   15908 out.go:303] Setting JSON to true
	I0103 18:57:50.935650   15908 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2417,"bootTime":1704305854,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:57:50.935712   15908 start.go:138] virtualization: kvm guest
	I0103 18:57:50.938167   15908 out.go:97] [download-only-365804] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:57:50.939845   15908 out.go:169] MINIKUBE_LOCATION=17885
	I0103 18:57:50.938302   15908 notify.go:220] Checking for updates...
	I0103 18:57:50.942696   15908 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:57:50.944281   15908 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 18:57:50.945733   15908 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 18:57:50.947119   15908 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0103 18:57:50.949481   15908 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 18:57:50.949914   15908 config.go:182] Loaded profile config "download-only-365804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0103 18:57:50.949957   15908 start.go:810] api.Load failed for download-only-365804: filestore "download-only-365804": Docker machine "download-only-365804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:57:50.950042   15908 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 18:57:50.950069   15908 start.go:810] api.Load failed for download-only-365804: filestore "download-only-365804": Docker machine "download-only-365804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:57:50.972345   15908 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 18:57:50.972460   15908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:57:51.022505   15908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-03 18:57:51.014542612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:57:51.022634   15908 docker.go:295] overlay module found
	I0103 18:57:51.024614   15908 out.go:97] Using the docker driver based on existing profile
	I0103 18:57:51.024636   15908 start.go:298] selected driver: docker
	I0103 18:57:51.024640   15908 start.go:902] validating driver "docker" against &{Name:download-only-365804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-365804 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:57:51.024786   15908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:57:51.074871   15908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-03 18:57:51.066891406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<ni
l> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:57:51.075513   15908 cni.go:84] Creating CNI manager for ""
	I0103 18:57:51.075531   15908 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 18:57:51.075542   15908 start_flags.go:323] config:
	{Name:download-only-365804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-365804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPU
s:}
	I0103 18:57:51.077569   15908 out.go:97] Starting control plane node download-only-365804 in cluster download-only-365804
	I0103 18:57:51.077590   15908 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 18:57:51.079182   15908 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 18:57:51.079205   15908 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:57:51.079258   15908 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 18:57:51.093732   15908 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 18:57:51.093875   15908 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 18:57:51.093897   15908 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 18:57:51.093902   15908 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 18:57:51.093913   15908 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 18:57:51.508893   15908 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 18:57:51.508940   15908 cache.go:56] Caching tarball of preloaded images
	I0103 18:57:51.509108   15908 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:57:51.511732   15908 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0103 18:57:51.511751   15908 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:57:51.621093   15908 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0103 18:58:05.181882   15908 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:58:05.181976   15908 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:58:06.122115   15908 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 18:58:06.122265   15908 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/download-only-365804/config.json ...
	I0103 18:58:06.122484   15908 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 18:58:06.122701   15908 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-365804"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
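
Worth noting in the v1.28.4 log above: the run reuses the existing download-only-365804 profile, so the loaded config still records the v1.16.0 node while KubernetesConfig.KubernetesVersion has moved to v1.28.4. One way to inspect that saved state is to decode the profile's config.json directly; a minimal sketch assuming the field names shown in the log dump (the struct here is illustrative, not minikube's own type):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	// Only the fields we want to peek at; the rest of config.json is ignored.
	type profileConfig struct {
		KubernetesConfig struct {
			KubernetesVersion string
		}
		Nodes []struct {
			KubernetesVersion string
		}
	}

	func main() {
		// Profile path as written in the logs above; adjust per host.
		b, err := os.ReadFile("/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/download-only-365804/config.json")
		if err != nil {
			log.Fatal(err)
		}
		var c profileConfig
		if err := json.Unmarshal(b, &c); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cluster version:", c.KubernetesConfig.KubernetesVersion)
		for _, n := range c.Nodes {
			fmt.Println("node version:  ", n.KubernetesVersion)
		}
	}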

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (42.56s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-365804 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-365804 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (42.560662932s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (42.56s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-365804
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-365804: exit status 85 (73.164463ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-365804 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-365804           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-365804 | jenkins | v1.32.0 | 03 Jan 24 18:57 UTC |          |
	|         | -p download-only-365804           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-365804 | jenkins | v1.32.0 | 03 Jan 24 18:58 UTC |          |
	|         | -p download-only-365804           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 18:58:34
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 18:58:34.594400   16142 out.go:296] Setting OutFile to fd 1 ...
	I0103 18:58:34.594528   16142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:58:34.594539   16142 out.go:309] Setting ErrFile to fd 2...
	I0103 18:58:34.594547   16142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 18:58:34.594763   16142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	W0103 18:58:34.594884   16142 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-8915/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-8915/.minikube/config/config.json: no such file or directory
	I0103 18:58:34.595303   16142 out.go:303] Setting JSON to true
	I0103 18:58:34.596069   16142 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2461,"bootTime":1704305854,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 18:58:34.596136   16142 start.go:138] virtualization: kvm guest
	I0103 18:58:34.598480   16142 out.go:97] [download-only-365804] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 18:58:34.600339   16142 out.go:169] MINIKUBE_LOCATION=17885
	I0103 18:58:34.598689   16142 notify.go:220] Checking for updates...
	I0103 18:58:34.603401   16142 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 18:58:34.604894   16142 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 18:58:34.606474   16142 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 18:58:34.607824   16142 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0103 18:58:34.610212   16142 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 18:58:34.610657   16142 config.go:182] Loaded profile config "download-only-365804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0103 18:58:34.610694   16142 start.go:810] api.Load failed for download-only-365804: filestore "download-only-365804": Docker machine "download-only-365804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:58:34.610771   16142 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 18:58:34.610800   16142 start.go:810] api.Load failed for download-only-365804: filestore "download-only-365804": Docker machine "download-only-365804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 18:58:34.634219   16142 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 18:58:34.634300   16142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:58:34.684834   16142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-03 18:58:34.676714217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:58:34.684934   16142 docker.go:295] overlay module found
	I0103 18:58:34.686928   16142 out.go:97] Using the docker driver based on existing profile
	I0103 18:58:34.686952   16142 start.go:298] selected driver: docker
	I0103 18:58:34.686959   16142 start.go:902] validating driver "docker" against &{Name:download-only-365804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-365804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:58:34.687116   16142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 18:58:34.739404   16142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2024-01-03 18:58:34.731811574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 18:58:34.740437   16142 cni.go:84] Creating CNI manager for ""
	I0103 18:58:34.740473   16142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 18:58:34.740499   16142 start_flags.go:323] config:
	{Name:download-only-365804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-365804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 18:58:34.742690   16142 out.go:97] Starting control plane node download-only-365804 in cluster download-only-365804
	I0103 18:58:34.742715   16142 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 18:58:34.744174   16142 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 18:58:34.744201   16142 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 18:58:34.744304   16142 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 18:58:34.760123   16142 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 18:58:34.760235   16142 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 18:58:34.760249   16142 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 18:58:34.760253   16142 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 18:58:34.760263   16142 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 18:58:34.849344   16142 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0103 18:58:34.849377   16142 cache.go:56] Caching tarball of preloaded images
	I0103 18:58:34.849534   16142 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 18:58:34.851581   16142 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0103 18:58:34.851598   16142 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:58:34.961683   16142 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0103 18:58:47.755696   16142 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:58:47.755781   16142 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17885-8915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0103 18:58:48.572867   16142 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0103 18:58:48.573008   16142 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/download-only-365804/config.json ...
	I0103 18:58:48.573217   16142 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 18:58:48.573399   16142 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17885-8915/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-365804"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)
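
Note: the downloads above carry a checksum hint in the URL query (?checksum=md5:... for the preload tarball, checksum=file:...kubectl.sha256 for the binary), and minikube verifies the artifact against that digest before caching it. A minimal sketch of the md5 variant in Go, with the file name and digest copied from the log purely for illustration:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it to the expected hex
    // digest, mirroring the checksum query attached to the preload download.
    func verifyMD5(path, expected string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
        }
        return nil
    }

    func main() {
        fmt.Println(verifyMD5(
            "preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4",
            "2e182f4d7475b49e22eaf15ea22c281b"))
    }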

                                                
                                    
TestDownloadOnly/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-365804
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.29s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-079803 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-079803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-079803
--- PASS: TestDownloadOnlyKic (1.29s)

                                                
                                    
TestBinaryMirror (0.73s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-757574 --alsologtostderr --binary-mirror http://127.0.0.1:34341 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-757574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-757574
--- PASS: TestBinaryMirror (0.73s)
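
Note: --binary-mirror points minikube at http://127.0.0.1:34341 for the kubectl/kubeadm/kubelet downloads instead of dl.k8s.io, so a mirror only has to serve the same path layout. A minimal stand-in, assuming the binaries live under ./mirror (directory name is illustrative):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a directory mirroring dl.k8s.io's layout, e.g.
        // ./mirror/release/v1.28.4/bin/linux/amd64/kubectl
        log.Fatal(http.ListenAndServe("127.0.0.1:34341",
            http.FileServer(http.Dir("./mirror"))))
    }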

                                                
                                    
TestOffline (88.53s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-205338 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-205338 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m21.763908736s)
helpers_test.go:175: Cleaning up "offline-crio-205338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-205338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-205338: (6.76325028s)
--- PASS: TestOffline (88.53s)
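
Note: an offline start like this one relies on the kic base image and the preload tarball already sitting in the local cache. A quick way to confirm the preload is present before attempting it (version string illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        home, _ := os.UserHomeDir()
        p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4")
        if st, err := os.Stat(p); err == nil {
            fmt.Printf("preload cached: %s (%d bytes)\n", p, st.Size())
        } else {
            fmt.Println("preload missing; the start would need another image source:", err)
        }
    }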

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-173367
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-173367: exit status 85 (61.494114ms)

                                                
                                                
-- stdout --
	* Profile "addons-173367" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-173367"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-173367
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-173367: exit status 85 (62.904053ms)

                                                
                                                
-- stdout --
	* Profile "addons-173367" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-173367"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (151.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-173367 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-173367 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.17732649s)
--- PASS: TestAddons/Setup (151.18s)

                                                
                                    
TestAddons/parallel/Registry (16.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 14.315107ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5mjnb" [0d2aeb06-5a71-450e-9d65-4d92104b10a9] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004177826s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xvslp" [86efa070-d422-44ef-85d4-80914a2c61d4] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003683789s
addons_test.go:340: (dbg) Run:  kubectl --context addons-173367 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-173367 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-173367 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.694146724s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 ip
2024/01/03 19:02:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.51s)
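
Note: the registry check above runs wget --spider against the in-cluster service DNS name from a busybox pod. The rough Go equivalent is a headers-only request; the URL resolves only from inside the cluster:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 10 * time.Second}
        // Service DNS name taken from the test command above.
        resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }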

                                                
                                    
TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nwhr9" [a453a1ac-bf61-4421-afe6-8abc3fb3f226] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004088377s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-173367
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-173367: (5.652561272s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.797772ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-2gr28" [1bcec89a-19d0-41ff-8305-2b37e1646fad] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004389185s
addons_test.go:415: (dbg) Run:  kubectl --context addons-173367 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)
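
Note: kubectl top pods is served by the metrics.k8s.io aggregated API that the metrics-server addon registers. A sketch of reading the same data raw with client-go (kubeconfig path taken from the environment; error handling kept minimal):

    package main

    import (
        "context"
        "fmt"
        "os"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Raw GET against the aggregated metrics API; returns a JSON PodMetricsList.
        raw, err := cs.CoreV1().RESTClient().Get().
            AbsPath("/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods").
            DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(raw))
    }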

                                                
                                    
TestAddons/parallel/HelmTiller (11.23s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 11.602483ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-8jgcn" [8e17fd14-1eb4-4234-99cd-b179d7fae114] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004190726s
addons_test.go:473: (dbg) Run:  kubectl --context addons-173367 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-173367 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.70873988s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.23s)

                                                
                                    
TestAddons/parallel/CSI (65.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 16.04894ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-173367 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-173367 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [affef49d-8270-4255-8d0e-4490e4599f31] Pending
helpers_test.go:344: "task-pv-pod" [affef49d-8270-4255-8d0e-4490e4599f31] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [affef49d-8270-4255-8d0e-4490e4599f31] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003861168s
addons_test.go:584: (dbg) Run:  kubectl --context addons-173367 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-173367 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-173367 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-173367 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-173367 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-173367 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-173367 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [35774c85-c180-46c3-b04c-ce8c53483bde] Pending
helpers_test.go:344: "task-pv-pod-restore" [35774c85-c180-46c3-b04c-ce8c53483bde] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [35774c85-c180-46c3-b04c-ce8c53483bde] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003534046s
addons_test.go:626: (dbg) Run:  kubectl --context addons-173367 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-173367 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-173367 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-173367 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.550239998s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.55s)
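
Note: the long run of jsonpath polls above is the test waiting for the PVC to leave Pending, which takes a while if the storage class binds WaitForFirstConsumer-style (an assumption about the csi-hostpath class). The programmatic equivalent with client-go:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll the claim until it is Bound, like the repeated kubectl gets above.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims("default").
                Get(context.Background(), "hpvc", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Println("phase:", pvc.Status.Phase)
            return pvc.Status.Phase == corev1.ClaimBound, nil
        })
        if err != nil {
            panic(err)
        }
    }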

                                                
                                    
TestAddons/parallel/Headlamp (18.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-173367 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-173367 --alsologtostderr -v=1: (1.734054404s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-mrwcb" [b17e794a-e28d-4110-8ea4-2c23528e8048] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-mrwcb" [b17e794a-e28d-4110-8ea4-2c23528e8048] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.003376044s
--- PASS: TestAddons/parallel/Headlamp (18.74s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-ttqb6" [4de81cf0-7af6-47e4-adef-19bab2a9a92e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003611886s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-173367
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                    
TestAddons/parallel/LocalPath (58.03s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-173367 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-173367 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ab77e7b4-7993-44eb-b90b-a9871b302da7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ab77e7b4-7993-44eb-b90b-a9871b302da7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ab77e7b4-7993-44eb-b90b-a9871b302da7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003145899s
addons_test.go:891: (dbg) Run:  kubectl --context addons-173367 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 ssh "cat /opt/local-path-provisioner/pvc-2c4082d9-6259-471c-9c2c-8d8a577bcbfb_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-173367 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-173367 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-173367 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-173367 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.035446294s)
--- PASS: TestAddons/parallel/LocalPath (58.03s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-txfsz" [3f385b33-2c9b-4e02-af46-d4993e55fec5] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00601943s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-173367
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-4frgl" [688b5962-a3da-4e0a-8671-c368f33a6cd3] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003836778s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-173367 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-173367 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-173367
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-173367: (11.884769193s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-173367
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-173367
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-173367
--- PASS: TestAddons/StoppedEnableDisable (12.16s)

                                                
                                    
TestCertOptions (29.61s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-868954 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-868954 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.539427892s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-868954 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-868954 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-868954 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-868954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-868954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-868954: (3.411454976s)
--- PASS: TestCertOptions (29.61s)
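
Note: the openssl step above prints the API server certificate so the test can assert that the extra --apiserver-ips/--apiserver-names values ended up as SANs. The same inspection in Go, assuming apiserver.crt has been copied off the node:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // path illustrative
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
        fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
    }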

                                                
                                    
TestCertExpiration (227.36s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-415444 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-415444 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.028621919s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-415444 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-415444 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.930561732s)
helpers_test.go:175: Cleaning up "cert-expiration-415444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-415444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-415444: (2.399288043s)
--- PASS: TestCertExpiration (227.36s)
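
Note: this test starts the cluster with --cert-expiration=3m and later restarts it with 8760h, relying on minikube to regenerate certificates that are expired or close to expiry. The check itself reduces to the certificate's NotAfter field; a sketch reusing a certificate parsed as in the previous snippet:

    package certcheck

    import (
        "crypto/x509"
        "fmt"
        "time"
    )

    // certStatus reports how much validity a parsed certificate has left.
    func certStatus(cert *x509.Certificate) string {
        remaining := time.Until(cert.NotAfter)
        if remaining <= 0 {
            return fmt.Sprintf("expired at %s", cert.NotAfter)
        }
        return fmt.Sprintf("valid for another %s", remaining.Round(time.Second))
    }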

                                                
                                    
TestForceSystemdFlag (26.91s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-847347 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0103 19:33:12.543967   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-847347 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.074856845s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-847347 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-847347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-847347
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-847347: (2.569096098s)
--- PASS: TestForceSystemdFlag (26.91s)
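
Note: the ssh step above cats /etc/crio/crio.conf.d/02-crio.conf; with --force-systemd the expectation (an assumption about what the test asserts) is a cgroup_manager = "systemd" entry. A sketch that scans a copy of the drop-in for that key:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("02-crio.conf") // fetched from the node; path illustrative
        if err != nil {
            panic(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "cgroup_manager") {
                fmt.Println("found:", line)
                return
            }
        }
        fmt.Println("cgroup_manager not set in this drop-in")
    }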

                                                
                                    
TestForceSystemdEnv (40.62s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-273501 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-273501 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.076697337s)
helpers_test.go:175: Cleaning up "force-systemd-env-273501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-273501
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-273501: (3.54189134s)
--- PASS: TestForceSystemdEnv (40.62s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.84s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.84s)

                                                
                                    
TestErrorSpam/setup (20.7s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-009164 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-009164 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-009164 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-009164 --driver=docker  --container-runtime=crio: (20.696904696s)
--- PASS: TestErrorSpam/setup (20.70s)

                                                
                                    
TestErrorSpam/start (0.61s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

                                                
                                    
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

                                                
                                    
TestErrorSpam/stop (1.4s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 stop: (1.199128375s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-009164 --log_dir /tmp/nospam-009164 stop
--- PASS: TestErrorSpam/stop (1.40s)
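
The stop run above closes out the error-spam suite: each subcommand is executed several times against a dedicated --log_dir, and the test fails if the runs leave unexpected warning or error lines behind. A minimal Go sketch of that pattern, assuming the binary path and profile from the log; the substring checks and the "expected output" rule are illustrative, not the suite's actual allowlist or log-file comparison:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runAndCheckSpam executes a minikube subcommand and reports any output
	// line that looks like an error or warning. Purely a sketch of the idea.
	func runAndCheckSpam(args ...string) error {
		out, _ := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		for _, line := range strings.Split(string(out), "\n") {
			line = strings.TrimSpace(line)
			if line == "" || strings.HasPrefix(line, "*") {
				continue // "*"-prefixed lines are normal status output
			}
			if strings.Contains(strings.ToLower(line), "error") ||
				strings.Contains(strings.ToLower(line), "warning") {
				return fmt.Errorf("unexpected spam: %q", line)
			}
		}
		return nil
	}

	func main() {
		args := []string{"-p", "nospam-009164", "--log_dir", "/tmp/nospam-009164", "stop"}
		if err := runAndCheckSpam(args...); err != nil {
			fmt.Println(err)
		}
	}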

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17885-8915/.minikube/files/etc/test/nested/copy/15670/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (65.21s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-436252 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0103 19:06:50.908421   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:50.914106   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:50.924443   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:50.944752   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:50.985039   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:51.065366   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:51.225782   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:51.546363   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:52.187330   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:06:53.467548   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-436252 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m5.205780213s)
--- PASS: TestFunctional/serial/StartWithProxy (65.21s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.28s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-436252 --alsologtostderr -v=8
E0103 19:06:56.028198   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:07:01.149035   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:07:11.390046   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-436252 --alsologtostderr -v=8: (29.283233935s)
functional_test.go:659: soft start took 29.283974709s for "functional-436252" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.28s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-436252 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-436252 /tmp/TestFunctionalserialCacheCmdcacheadd_local1537291150/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cache add minikube-local-cache-test:functional-436252
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 cache add minikube-local-cache-test:functional-436252: (1.598340615s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cache delete minikube-local-cache-test:functional-436252
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-436252
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.93s)
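
For reference, the add_local flow exercised above is: build a throwaway image with docker, push it into minikube's image cache, then remove both the cache entry and the local tag. A minimal Go sketch of the same sequence; the image name, profile, and binary path come from the log, while the build-context directory is hypothetical:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes one step of the flow, streaming its output, and reports
	// (but does not abort on) failures, much like the "(dbg) Run:" lines above.
	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("step failed:", err)
		}
	}

	func main() {
		img := "minikube-local-cache-test:functional-436252"
		run("docker", "build", "-t", img, "/tmp/build-context") // hypothetical context dir
		run("out/minikube-linux-amd64", "-p", "functional-436252", "cache", "add", img)
		run("out/minikube-linux-amd64", "-p", "functional-436252", "cache", "delete", img)
		run("docker", "rmi", img)
	}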

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.378249ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
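
The cache_reload run above demonstrates the round trip: delete a cached image inside the node with crictl, confirm inspecti now fails, run `cache reload` to push every cached image back in, and confirm inspecti succeeds again. A minimal Go sketch of that round trip, assuming the binary path and profile from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// mk runs a minikube subcommand against the functional-436252 profile.
		mk := func(args ...string) error {
			all := append([]string{"-p", "functional-436252"}, args...)
			return exec.Command("out/minikube-linux-amd64", all...).Run()
		}
		_ = mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		if mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			fmt.Println("expected inspecti to fail while the image is absent")
		}
		_ = mk("cache", "reload") // re-pushes cached images into the node
		if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}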

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 kubectl -- --context functional-436252 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-436252 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-436252 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0103 19:07:31.870240   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-436252 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.493585652s)
functional_test.go:757: restart took 33.494051495s for "functional-436252" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.49s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-436252 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 logs: (1.325397894s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 logs --file /tmp/TestFunctionalserialLogsFileCmd1542880394/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 logs --file /tmp/TestFunctionalserialLogsFileCmd1542880394/001/logs.txt: (1.338503843s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctional/serial/InvalidService (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-436252 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-436252
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-436252: exit status 115 (337.697887ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30550 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-436252 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 config get cpus: exit status 14 (84.204616ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 config get cpus: exit status 14 (73.424159ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
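
As the two non-zero exits above show, `config get` signals "key not set" with exit status 14 rather than with empty output, so callers can distinguish an unset key from a real failure. A minimal Go sketch of reading that exit code, assuming the binary path and profile from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// getConfig returns the value and whether the key was set at all;
	// exit status 14 is treated as "unset", any other failure is an error.
	func getConfig(key string) (string, bool, error) {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-436252", "config", "get", key).Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 14 {
			return "", false, nil
		}
		if err != nil {
			return "", false, err
		}
		return strings.TrimSpace(string(out)), true, nil
	}

	func main() {
		v, ok, err := getConfig("cpus")
		switch {
		case err != nil:
			fmt.Println("config get failed:", err)
		case !ok:
			fmt.Println("cpus is not set")
		default:
			fmt.Println("cpus =", v)
		}
	}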

                                                
                                    
TestFunctional/parallel/DashboardCmd (22.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-436252 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-436252 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 51315: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.21s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-436252 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-436252 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (178.107643ms)

                                                
                                                
-- stdout --
	* [functional-436252] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:08:35.958382   50452 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:08:35.958658   50452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:08:35.958668   50452 out.go:309] Setting ErrFile to fd 2...
	I0103 19:08:35.958676   50452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:08:35.958899   50452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:08:35.959449   50452 out.go:303] Setting JSON to false
	I0103 19:08:35.960401   50452 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3062,"bootTime":1704305854,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:08:35.960459   50452 start.go:138] virtualization: kvm guest
	I0103 19:08:35.962820   50452 out.go:177] * [functional-436252] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:08:35.964256   50452 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:08:35.964334   50452 notify.go:220] Checking for updates...
	I0103 19:08:35.965732   50452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:08:35.967353   50452 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:08:35.968828   50452 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 19:08:35.970214   50452 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:08:35.971530   50452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:08:35.973415   50452 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:08:35.974085   50452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:08:36.006624   50452 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:08:36.006724   50452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:08:36.067147   50452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:49 SystemTime:2024-01-03 19:08:36.055797883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:08:36.067232   50452 docker.go:295] overlay module found
	I0103 19:08:36.069313   50452 out.go:177] * Using the docker driver based on existing profile
	I0103 19:08:36.070674   50452 start.go:298] selected driver: docker
	I0103 19:08:36.070687   50452 start.go:902] validating driver "docker" against &{Name:functional-436252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-436252 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:08:36.070769   50452 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:08:36.073238   50452 out.go:177] 
	W0103 19:08:36.074529   50452 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0103 19:08:36.075910   50452 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-436252 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
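
Both dry runs above validate flags without provisioning anything; the 250MB request trips minikube's usable minimum of 1800MB and the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal Go sketch that checks for that exit code, assuming the binary path, profile, and code value shown in the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the test, including the deliberately tiny memory request.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "functional-436252", "--dry-run", "--memory", "250MB",
			"--driver=docker", "--container-runtime=crio")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 23 {
			fmt.Println("dry run rejected the memory request, as expected")
		} else {
			fmt.Println("unexpected result:", err)
		}
	}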

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-436252 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-436252 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (186.619374ms)

                                                
                                                
-- stdout --
	* [functional-436252] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:08:35.771845   50355 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:08:35.771962   50355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:08:35.771971   50355 out.go:309] Setting ErrFile to fd 2...
	I0103 19:08:35.771975   50355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:08:35.772270   50355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:08:35.772787   50355 out.go:303] Setting JSON to false
	I0103 19:08:35.773739   50355 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3062,"bootTime":1704305854,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:08:35.773795   50355 start.go:138] virtualization: kvm guest
	I0103 19:08:35.778223   50355 out.go:177] * [functional-436252] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0103 19:08:35.779952   50355 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:08:35.779955   50355 notify.go:220] Checking for updates...
	I0103 19:08:35.783261   50355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:08:35.784864   50355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:08:35.786624   50355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 19:08:35.790168   50355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:08:35.791726   50355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:08:35.793969   50355 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:08:35.794537   50355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:08:35.824629   50355 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:08:35.824758   50355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:08:35.888182   50355 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:49 SystemTime:2024-01-03 19:08:35.875854197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:08:35.888312   50355 docker.go:295] overlay module found
	I0103 19:08:35.891093   50355 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0103 19:08:35.892424   50355 start.go:298] selected driver: docker
	I0103 19:08:35.892437   50355 start.go:902] validating driver "docker" against &{Name:functional-436252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-436252 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:08:35.892548   50355 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:08:35.894590   50355 out.go:177] 
	W0103 19:08:35.895921   50355 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0103 19:08:35.897148   50355 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (18.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-436252 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-436252 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-fscs4" [20e8a7f9-cd34-42eb-a462-5f5d33303956] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-fscs4" [20e8a7f9-cd34-42eb-a462-5f5d33303956] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.003880201s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30123
functional_test.go:1674: http://192.168.49.2:30123: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-fscs4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30123
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.68s)
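
The flow above is the standard NodePort round trip: create a deployment, expose it, ask minikube for the node URL, then fetch it. A minimal Go sketch of the same steps, using the image, names, and binary path from the log; note that the real test waits for the pod to be Running before requesting the URL:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// run executes a command and returns its trimmed stdout.
		run := func(name string, args ...string) string {
			out, _ := exec.Command(name, args...).Output()
			return strings.TrimSpace(string(out))
		}
		run("kubectl", "--context", "functional-436252", "create", "deployment",
			"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
		run("kubectl", "--context", "functional-436252", "expose", "deployment",
			"hello-node-connect", "--type=NodePort", "--port=8080")
		// (wait for the pod to become Ready here, as the test does)
		url := run("out/minikube-linux-amd64", "-p", "functional-436252",
			"service", "hello-node-connect", "--url")
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("service not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(string(body)) // echoserver reflects the request back
	}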

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [63714b9b-461f-456b-ad14-dcb338f6a965] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.045016225s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-436252 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-436252 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-436252 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-436252 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e9bac5a5-2b9d-4a87-9be1-2f2ef59db69e] Pending
helpers_test.go:344: "sp-pod" [e9bac5a5-2b9d-4a87-9be1-2f2ef59db69e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e9bac5a5-2b9d-4a87-9be1-2f2ef59db69e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.00397467s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-436252 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-436252 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-436252 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a14fb190-5e50-463b-877e-81645fce3c08] Pending
helpers_test.go:344: "sp-pod" [a14fb190-5e50-463b-877e-81645fce3c08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a14fb190-5e50-463b-877e-81645fce3c08] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004108523s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-436252 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.46s)
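
The second pod finding the file with `ls /tmp/mount` is the point of this test: data written through the claim survives pod deletion because it lives on the PersistentVolume, not in the container filesystem. A minimal Go sketch of that persistence check, using the context, pod name, and manifests named in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// kc runs kubectl against the functional-436252 context.
		kc := func(args ...string) error {
			all := append([]string{"--context", "functional-436252"}, args...)
			return exec.Command("kubectl", all...).Run()
		}
		_ = kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		_ = kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		_ = kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (wait for the replacement sp-pod to be Running, as the test does)
		if err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
			fmt.Println("file did not persist across pod recreation:", err)
		}
	}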

                                                
                                    
TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh -n functional-436252 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cp functional-436252:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3224674519/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh -n functional-436252 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh -n functional-436252 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

                                                
                                    
TestFunctional/parallel/MySQL (22.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-436252 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-sc42p" [18fb5f2b-bdda-4829-be9f-744f3152f99a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-sc42p" [18fb5f2b-bdda-4829-be9f-744f3152f99a] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.003766064s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-436252 exec mysql-859648c796-sc42p -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-436252 exec mysql-859648c796-sc42p -- mysql -ppassword -e "show databases;": exit status 1 (138.5806ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-436252 exec mysql-859648c796-sc42p -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-436252 exec mysql-859648c796-sc42p -- mysql -ppassword -e "show databases;": exit status 1 (133.656208ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-436252 exec mysql-859648c796-sc42p -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.13s)
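
The two ERROR 2002 exits above are expected: mysqld starts accepting socket connections a little after the pod reports Running, so the test simply retries the query until it succeeds. A minimal Go sketch of that retry loop, using the context and pod name from the log; the attempt count and delay are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			err := exec.Command("kubectl", "--context", "functional-436252",
				"exec", "mysql-859648c796-sc42p", "--",
				"mysql", "-ppassword", "-e", "show databases;").Run()
			if err == nil {
				fmt.Println("mysql is ready")
				return
			}
			time.Sleep(2 * time.Second) // give mysqld time to finish warming up
		}
		fmt.Println("mysql never became ready")
	}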

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/15670/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo cat /etc/test/nested/copy/15670/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/15670.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo cat /etc/ssl/certs/15670.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/15670.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo cat /usr/share/ca-certificates/15670.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/156702.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo cat /etc/ssl/certs/156702.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/156702.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo cat /usr/share/ca-certificates/156702.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-436252 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
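
Note: the go-template above prints only the label keys of the first node. An equivalent check without templating (a sketch):

	kubectl --context functional-436252 get nodes --show-labels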

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh "sudo systemctl is-active docker": exit status 1 (316.0843ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh "sudo systemctl is-active containerd": exit status 1 (298.553368ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
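
Note: `systemctl is-active` prints the unit state and exits 0 only for "active", so on this CRI-O cluster the expected result for docker and containerd is "inactive" on stdout with a non-zero exit (status 3 is systemd's "not running" code), which is exactly what the test asserts. Manual equivalent (a sketch):

	minikube -p functional-436252 ssh "sudo systemctl is-active docker"      # inactive, exit 3
	minikube -p functional-436252 ssh "sudo systemctl is-active containerd"  # inactive, exit 3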

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-436252 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-436252
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-436252 image ls --format short --alsologtostderr:
I0103 19:08:59.538572   54510 out.go:296] Setting OutFile to fd 1 ...
I0103 19:08:59.538842   54510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:08:59.538852   54510 out.go:309] Setting ErrFile to fd 2...
I0103 19:08:59.538857   54510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:08:59.539103   54510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
I0103 19:08:59.539705   54510 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:08:59.539822   54510 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:08:59.540272   54510 cli_runner.go:164] Run: docker container inspect functional-436252 --format={{.State.Status}}
I0103 19:08:59.556373   54510 ssh_runner.go:195] Run: systemctl --version
I0103 19:08:59.556436   54510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-436252
I0103 19:08:59.571927   54510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/functional-436252/id_rsa Username:docker}
I0103 19:08:59.654513   54510 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
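
Note: as the stderr trace shows, `image ls` on a CRI-O cluster is answered by running `crictl images --output json` on the node over SSH; the minikube CLI only reformats that JSON into the short/table/json/yaml views tested below. Direct equivalent (a sketch):

	minikube -p functional-436252 ssh "sudo crictl images --output json"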

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-436252 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-436252  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-436252 image ls --format table --alsologtostderr:
I0103 19:09:00.143723   54794 out.go:296] Setting OutFile to fd 1 ...
I0103 19:09:00.143830   54794 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:09:00.143839   54794 out.go:309] Setting ErrFile to fd 2...
I0103 19:09:00.143844   54794 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:09:00.144038   54794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
I0103 19:09:00.144639   54794 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:09:00.144744   54794 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:09:00.145152   54794 cli_runner.go:164] Run: docker container inspect functional-436252 --format={{.State.Status}}
I0103 19:09:00.161224   54794 ssh_runner.go:195] Run: systemctl --version
I0103 19:09:00.161279   54794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-436252
I0103 19:09:00.177595   54794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/functional-436252/id_rsa Username:docker}
I0103 19:09:00.262404   54794 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-436252 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-436252"],"size":"34114467"},
{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},
{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},
{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},
{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},
{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},
{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-436252 image ls --format json --alsologtostderr:
I0103 19:08:59.927414   54689 out.go:296] Setting OutFile to fd 1 ...
I0103 19:08:59.927576   54689 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:08:59.927590   54689 out.go:309] Setting ErrFile to fd 2...
I0103 19:08:59.927599   54689 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:08:59.927820   54689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
I0103 19:08:59.928404   54689 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:08:59.928519   54689 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:08:59.928907   54689 cli_runner.go:164] Run: docker container inspect functional-436252 --format={{.State.Status}}
I0103 19:08:59.945301   54689 ssh_runner.go:195] Run: systemctl --version
I0103 19:08:59.945353   54689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-436252
I0103 19:08:59.962659   54689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/functional-436252/id_rsa Username:docker}
I0103 19:09:00.050434   54689 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
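
Note: the JSON format is the most convenient one to post-process. For example, to list only the tagged image names (a sketch, assuming jq is installed on the host):

	minikube -p functional-436252 image ls --format json | jq -r '.[].repoTags[]'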

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-436252 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-436252
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-436252 image ls --format yaml --alsologtostderr:
I0103 19:08:59.702061   54551 out.go:296] Setting OutFile to fd 1 ...
I0103 19:08:59.702221   54551 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:08:59.702232   54551 out.go:309] Setting ErrFile to fd 2...
I0103 19:08:59.702236   54551 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:08:59.702425   54551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
I0103 19:08:59.703005   54551 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:08:59.703144   54551 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:08:59.703650   54551 cli_runner.go:164] Run: docker container inspect functional-436252 --format={{.State.Status}}
I0103 19:08:59.721616   54551 ssh_runner.go:195] Run: systemctl --version
I0103 19:08:59.721668   54551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-436252
I0103 19:08:59.743257   54551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/functional-436252/id_rsa Username:docker}
I0103 19:08:59.830987   54551 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh pgrep buildkitd: exit status 1 (270.767481ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image build -t localhost/my-image:functional-436252 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 image build -t localhost/my-image:functional-436252 testdata/build --alsologtostderr: (2.622838509s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-436252 image build -t localhost/my-image:functional-436252 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 444853df003
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-436252
--> 26095731d50
Successfully tagged localhost/my-image:functional-436252
26095731d50a5d0805dff66ebbfc3884f3135c771911afb21eff411468cd9c3e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-436252 image build -t localhost/my-image:functional-436252 testdata/build --alsologtostderr:
I0103 19:09:00.025823   54732 out.go:296] Setting OutFile to fd 1 ...
I0103 19:09:00.026170   54732 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:09:00.026186   54732 out.go:309] Setting ErrFile to fd 2...
I0103 19:09:00.026194   54732 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:09:00.026461   54732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
I0103 19:09:00.027178   54732 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:09:00.027858   54732 config.go:182] Loaded profile config "functional-436252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 19:09:00.028337   54732 cli_runner.go:164] Run: docker container inspect functional-436252 --format={{.State.Status}}
I0103 19:09:00.046651   54732 ssh_runner.go:195] Run: systemctl --version
I0103 19:09:00.046709   54732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-436252
I0103 19:09:00.084067   54732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/functional-436252/id_rsa Username:docker}
I0103 19:09:00.174635   54732 build_images.go:151] Building image from path: /tmp/build.1097954363.tar
I0103 19:09:00.174706   54732 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0103 19:09:00.182698   54732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1097954363.tar
I0103 19:09:00.185578   54732 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1097954363.tar: stat -c "%s %y" /var/lib/minikube/build/build.1097954363.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1097954363.tar': No such file or directory
I0103 19:09:00.185607   54732 ssh_runner.go:362] scp /tmp/build.1097954363.tar --> /var/lib/minikube/build/build.1097954363.tar (3072 bytes)
I0103 19:09:00.207104   54732 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1097954363
I0103 19:09:00.214679   54732 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1097954363 -xf /var/lib/minikube/build/build.1097954363.tar
I0103 19:09:00.222665   54732 crio.go:297] Building image: /var/lib/minikube/build/build.1097954363
I0103 19:09:00.222727   54732 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-436252 /var/lib/minikube/build/build.1097954363 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0103 19:09:02.569654   54732 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-436252 /var/lib/minikube/build/build.1097954363 --cgroup-manager=cgroupfs: (2.346899885s)
I0103 19:09:02.569718   54732 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1097954363
I0103 19:09:02.577861   54732 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1097954363.tar
I0103 19:09:02.585317   54732 build_images.go:207] Built localhost/my-image:functional-436252 from /tmp/build.1097954363.tar
I0103 19:09:02.585358   54732 build_images.go:123] succeeded building to: functional-436252
I0103 19:09:02.585364   54732 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
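
Note: the three STEP lines imply a minimal build context. A plausible reconstruction of testdata/build (an inference from the log above, not the repository's actual files):

	# Dockerfile
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

On a CRI-O cluster, `minikube image build` tars the context, ships it to the node (build.1097954363.tar above), and builds it there with `sudo podman build`, as the trace shows.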

TestFunctional/parallel/ImageCommands/Setup (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.050111917s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-436252
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.07s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
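
Note: all three update-context cases run the same command; `minikube update-context` rewrites the kubeconfig entry for the profile so the API server address matches the container's current IP and port. A manual check (a sketch):

	minikube -p functional-436252 update-context
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'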

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-436252 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-436252 tunnel --alsologtostderr]
E0103 19:08:12.831120   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-436252 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-436252 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 47685: os: process already finished
helpers_test.go:502: unable to terminate pid 47422: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-436252 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-436252 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3011552a-590c-4e01-8d15-a3e3c9c70317] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3011552a-590c-4e01-8d15-a3e3c9c70317] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.005925602s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image load --daemon gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 image load --daemon gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr: (7.815118912s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (8.42s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image load --daemon gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 image load --daemon gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr: (4.098432728s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.31s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.502216494s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-436252
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image load --daemon gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 image load --daemon gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr: (7.583667438s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-436252 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.201.139 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
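
Note: with `minikube tunnel` running, the LoadBalancer service's assigned IP (10.110.201.139 above, from the IngressIP step) becomes routable from the host, so a plain curl reaches the service directly. A sketch (the tunnel may prompt for sudo to add routes):

	minikube -p functional-436252 tunnel &
	curl http://10.110.201.139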

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-436252 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (9.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdany-port437644881/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704308915416260303" to /tmp/TestFunctionalparallelMountCmdany-port437644881/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704308915416260303" to /tmp/TestFunctionalparallelMountCmdany-port437644881/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704308915416260303" to /tmp/TestFunctionalparallelMountCmdany-port437644881/001/test-1704308915416260303
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (373.984651ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  3 19:08 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  3 19:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  3 19:08 test-1704308915416260303
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh cat /mount-9p/test-1704308915416260303
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-436252 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ec89052b-4d20-4549-a38b-78ea4c4a69f7] Pending
helpers_test.go:344: "busybox-mount" [ec89052b-4d20-4549-a38b-78ea4c4a69f7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ec89052b-4d20-4549-a38b-78ea4c4a69f7] Running
helpers_test.go:344: "busybox-mount" [ec89052b-4d20-4549-a38b-78ea4c4a69f7] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ec89052b-4d20-4549-a38b-78ea4c4a69f7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00345876s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-436252 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdany-port437644881/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.19s)
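
Note: `minikube mount` serves the host directory over 9p and mounts it inside the node, which is why the test verifies it with findmnt and a busybox pod that reads and writes through the mount; the single failed findmnt at the start is just the mount not being ready yet, and the test retries. Minimal usage (a sketch; /tmp/hostdir is a placeholder):

	minikube -p functional-436252 mount /tmp/hostdir:/mount-9p &
	minikube -p functional-436252 ssh "findmnt -T /mount-9p"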

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image save gcr.io/google-containers/addon-resizer:functional-436252 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 image save gcr.io/google-containers/addon-resizer:functional-436252 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.078835863s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image rm gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-436252
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 image save --daemon gcr.io/google-containers/addon-resizer:functional-436252 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-436252
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)
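
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a full save/load round trip for a cluster image. Condensed (a sketch; the tar path here stands in for this run's workspace path):

	minikube -p functional-436252 image save gcr.io/google-containers/addon-resizer:functional-436252 ./addon-resizer-save.tar
	minikube -p functional-436252 image rm gcr.io/google-containers/addon-resizer:functional-436252
	minikube -p functional-436252 image load ./addon-resizer-save.tar
	minikube -p functional-436252 image save --daemon gcr.io/google-containers/addon-resizer:functional-436252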

TestFunctional/parallel/MountCmd/specific-port (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdspecific-port4200299610/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.16223ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdspecific-port4200299610/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh "sudo umount -f /mount-9p": exit status 1 (253.68216ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-436252 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdspecific-port4200299610/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)
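
To reproduce this check by hand, a minimal sketch using the same commands the test drives (the host path is a placeholder; the profile name is from this run):

    out/minikube-linux-amd64 mount -p functional-436252 /tmp/src:/mount-9p --port 46464 &
    # Confirm the guest sees a 9p filesystem at the mount point.
    out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T /mount-9p | grep 9p"
    # Force-unmount in the guest; exits non-zero (status 32 above) if nothing is mounted.
    out/minikube-linux-amd64 -p functional-436252 ssh "sudo umount -f /mount-9p"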

TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3957375422/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3957375422/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3957375422/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T" /mount1: exit status 1 (322.624103ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-436252 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3957375422/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3957375422/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-436252 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3957375422/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)
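
The cleanup being verified hinges on the mount command's --kill flag, which tears down every mount process belonging to the profile at once; a hedged sketch (host path is a placeholder):

    out/minikube-linux-amd64 mount -p functional-436252 /tmp/src:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-436252 /tmp/src:/mount2 --alsologtostderr -v=1 &
    # Kill all mount processes for this profile in one shot.
    out/minikube-linux-amd64 mount -p functional-436252 --kill=true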

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "293.032793ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "65.081664ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "307.808913ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "60.801086ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
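
The JSON form is meant for scripting; as a sketch, the profile names could be pulled out with jq (the .valid[].Name path is an assumption about the schema; --light skips the cluster status probes, which is why it returns in ~60ms above):

    out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'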

TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-436252 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-436252 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-cr2b4" [ecd0e0ba-d208-48a9-831a-e3a6213fd5b0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-cr2b4" [ecd0e0ba-d208-48a9-831a-e3a6213fd5b0] Running
2024/01/03 19:08:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003693358s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 service list: (1.692001834s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-436252 service list -o json: (1.682536369s)
functional_test.go:1493: Took "1.682657495s" to run "out/minikube-linux-amd64 -p functional-436252 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31667
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-436252 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31667
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)
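
A typical consumer captures the printed endpoint and probes it; a minimal sketch against the same service (the curl step is illustrative):

    URL=$(out/minikube-linux-amd64 -p functional-436252 service hello-node --url)
    curl -s "$URL"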

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-436252
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-436252
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-436252
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (76.6s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-547465 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0103 19:09:34.751562   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-547465 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m16.603814083s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (76.60s)
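
The flag combination here is reusable for pinning any Kubernetes release minikube still supports; the invocation from this run, reformatted for readability:

    out/minikube-linux-amd64 start -p ingress-addon-legacy-547465 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=docker --container-runtime=crio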

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.39s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-547465 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-547465 addons enable ingress --alsologtostderr -v=5: (14.389677167s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.39s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-547465 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

TestJSONOutput/start/Command (68.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-733995 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0103 19:13:53.507252   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:14:34.469236   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-733995 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m8.858999951s)
--- PASS: TestJSONOutput/start/Command (68.86s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-733995 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-733995 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-733995 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-733995 --output=json --user=testUser: (5.737516529s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-402283 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-402283 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.524607ms)
-- stdout --
	{"specversion":"1.0","id":"a1b7aa6f-201c-465d-a6af-387c9a9c78a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-402283] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d30475db-881d-4bc5-b530-b15bf2cf26b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17885"}}
	{"specversion":"1.0","id":"b9541a17-a6cf-407c-be29-2a5aa0c570dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7a405822-6e31-485c-99a6-e1a6404d4422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig"}}
	{"specversion":"1.0","id":"6dbe7606-fb33-4758-80ec-aa486b58c68d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube"}}
	{"specversion":"1.0","id":"9b1d7778-3f65-4b54-9010-1f6aa6f84d88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b9773c38-69b2-4f9e-b6bc-ba2d8cd57a98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"de57b787-ebbe-436e-aaa9-099059d90d83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-402283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-402283
--- PASS: TestErrorJSONOutput (0.23s)
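
Each stdout line above is a standalone CloudEvents JSON object, so a consumer can surface the failure without scraping stderr; a sketch, assuming jq is available:

    out/minikube-linux-amd64 start -p json-output-error-402283 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64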

TestKicCustomNetwork/create_custom_network (41.54s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-455342 --network=
E0103 19:15:41.654476   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:41.659778   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:41.670039   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:41.690293   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:41.730634   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:41.810964   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:41.971407   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:42.291971   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:42.933122   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:44.214287   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
E0103 19:15:46.775108   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-455342 --network=: (39.509070113s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-455342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-455342
E0103 19:15:51.896095   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-455342: (2.014884839s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.54s)

TestKicCustomNetwork/use_default_bridge_network (26.53s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-061334 --network=bridge
E0103 19:15:56.390128   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:16:02.136547   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-061334 --network=bridge: (24.586413581s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-061334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-061334
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-061334: (1.927622305s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.53s)

TestKicExistingNetwork (25.01s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-564467 --network=existing-network
E0103 19:16:22.617061   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-564467 --network=existing-network: (23.007704152s)
helpers_test.go:175: Cleaning up "existing-network-564467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-564467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-564467: (1.867294568s)
--- PASS: TestKicExistingNetwork (25.01s)

TestKicCustomSubnet (23.86s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-062643 --subnet=192.168.60.0/24
E0103 19:16:50.907650   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
E0103 19:17:03.577601   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-062643 --subnet=192.168.60.0/24: (21.809758129s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-062643 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-062643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-062643
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-062643: (2.028149683s)
--- PASS: TestKicCustomSubnet (23.86s)
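
The assertion reduces to comparing the requested subnet with what Docker's IPAM actually allocated; the same check by hand, using the commands from this run:

    out/minikube-linux-amd64 start -p custom-subnet-062643 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-062643 --format "{{(index .IPAM.Config 0).Subnet}}"
    # Expected output: 192.168.60.0/24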

TestKicStaticIP (27.31s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-577315 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-577315 --static-ip=192.168.200.200: (25.204650291s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-577315 ip
helpers_test.go:175: Cleaning up "static-ip-577315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-577315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-577315: (1.970700388s)
--- PASS: TestKicStaticIP (27.31s)
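
The static-IP variant follows the same pattern, pinning the node address at creation and reading it back:

    out/minikube-linux-amd64 start -p static-ip-577315 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-577315 ip    # should print 192.168.200.200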

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-059657 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-059657 --driver=docker  --container-runtime=crio: (21.299450056s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-061892 --driver=docker  --container-runtime=crio
E0103 19:18:12.544715   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-061892 --driver=docker  --container-runtime=crio: (24.111668272s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-059657
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-061892
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-061892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-061892
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-061892: (1.831805345s)
helpers_test.go:175: Cleaning up "first-059657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-059657
E0103 19:18:25.498802   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-059657: (2.151934076s)
--- PASS: TestMinikubeProfile (50.38s)

TestMountStart/serial/StartWithMountFirst (8.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-657797 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-657797 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.487718194s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.49s)
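
Unlike the MountCmd tests above, which run the mount subcommand as a separate process, these configure the host mount at start time via the --mount* flags on a Kubernetes-less node; the invocation from this run, reformatted:

    out/minikube-linux-amd64 start -p mount-start-1-657797 --memory=2048 \
      --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # The host directory then appears in the guest at /minikube-host.
    out/minikube-linux-amd64 -p mount-start-1-657797 ssh -- ls /minikube-host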

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-657797 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-668281 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-668281 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.825678072s)
E0103 19:18:40.230582   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (5.83s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668281 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-657797 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-657797 --alsologtostderr -v=5: (1.615349104s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668281 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-668281
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-668281: (1.224061677s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.72s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-668281
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-668281: (6.721545545s)
--- PASS: TestMountStart/serial/RestartStopped (7.72s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668281 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (117.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-867906 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0103 19:20:41.653866   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-867906 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m57.041460929s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.49s)

TestMultiNode/serial/DeployApp2Nodes (5.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-867906 -- rollout status deployment/busybox: (3.674228366s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-8j67l -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-nkg7x -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-8j67l -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-nkg7x -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-8j67l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-867906 -- exec busybox-5bc68d56bd-nkg7x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.34s)
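
The lookups run from a pod scheduled on each node, which catches per-node DNS breakage rather than just control-plane health; one probe by hand (the pod name is specific to this run):

    out/minikube-linux-amd64 kubectl -p multinode-867906 -- \
      exec busybox-5bc68d56bd-8j67l -- nslookup kubernetes.default.svc.cluster.local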

TestMultiNode/serial/AddNode (18.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-867906 -v 3 --alsologtostderr
E0103 19:21:09.339589   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-867906 -v 3 --alsologtostderr: (17.607948528s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.18s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-867906 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.28s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (8.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp testdata/cp-test.txt multinode-867906:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2174766070/001/cp-test_multinode-867906.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906:/home/docker/cp-test.txt multinode-867906-m02:/home/docker/cp-test_multinode-867906_multinode-867906-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m02 "sudo cat /home/docker/cp-test_multinode-867906_multinode-867906-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906:/home/docker/cp-test.txt multinode-867906-m03:/home/docker/cp-test_multinode-867906_multinode-867906-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m03 "sudo cat /home/docker/cp-test_multinode-867906_multinode-867906-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp testdata/cp-test.txt multinode-867906-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2174766070/001/cp-test_multinode-867906-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906-m02:/home/docker/cp-test.txt multinode-867906:/home/docker/cp-test_multinode-867906-m02_multinode-867906.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906 "sudo cat /home/docker/cp-test_multinode-867906-m02_multinode-867906.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906-m02:/home/docker/cp-test.txt multinode-867906-m03:/home/docker/cp-test_multinode-867906-m02_multinode-867906-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m03 "sudo cat /home/docker/cp-test_multinode-867906-m02_multinode-867906-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp testdata/cp-test.txt multinode-867906-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2174766070/001/cp-test_multinode-867906-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906-m03:/home/docker/cp-test.txt multinode-867906:/home/docker/cp-test_multinode-867906-m03_multinode-867906.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906 "sudo cat /home/docker/cp-test_multinode-867906-m03_multinode-867906.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906-m03:/home/docker/cp-test.txt multinode-867906-m02:/home/docker/cp-test_multinode-867906-m03_multinode-867906-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m02 "sudo cat /home/docker/cp-test_multinode-867906-m03_multinode-867906-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.92s)
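
The cp subcommand addresses files as [node:]path, so host-to-node and node-to-node copies share one syntax; a condensed sketch of the matrix exercised above:

    out/minikube-linux-amd64 -p multinode-867906 cp testdata/cp-test.txt multinode-867906:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-867906 cp multinode-867906:/home/docker/cp-test.txt multinode-867906-m02:/home/docker/cp-test.txt
    # Verify on the destination node.
    out/minikube-linux-amd64 -p multinode-867906 ssh -n multinode-867906-m02 "sudo cat /home/docker/cp-test.txt"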

TestMultiNode/serial/StopNode (2.1s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-867906 node stop m03: (1.208077322s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-867906 status: exit status 7 (447.423997ms)
-- stdout --
	multinode-867906
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-867906-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-867906-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr: exit status 7 (447.287552ms)
-- stdout --
	multinode-867906
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-867906-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-867906-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0103 19:21:29.262796  115848 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:21:29.262948  115848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:21:29.262958  115848 out.go:309] Setting ErrFile to fd 2...
	I0103 19:21:29.262965  115848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:21:29.263154  115848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:21:29.263328  115848 out.go:303] Setting JSON to false
	I0103 19:21:29.263359  115848 mustload.go:65] Loading cluster: multinode-867906
	I0103 19:21:29.263438  115848 notify.go:220] Checking for updates...
	I0103 19:21:29.263776  115848 config.go:182] Loaded profile config "multinode-867906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:21:29.263794  115848 status.go:255] checking status of multinode-867906 ...
	I0103 19:21:29.264254  115848 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:21:29.282097  115848 status.go:330] multinode-867906 host status = "Running" (err=<nil>)
	I0103 19:21:29.282127  115848 host.go:66] Checking if "multinode-867906" exists ...
	I0103 19:21:29.282467  115848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906
	I0103 19:21:29.298595  115848 host.go:66] Checking if "multinode-867906" exists ...
	I0103 19:21:29.298923  115848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:21:29.298968  115848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906
	I0103 19:21:29.314696  115848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906/id_rsa Username:docker}
	I0103 19:21:29.398904  115848 ssh_runner.go:195] Run: systemctl --version
	I0103 19:21:29.402943  115848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:21:29.412801  115848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:21:29.464117  115848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:56 SystemTime:2024-01-03 19:21:29.454607939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:21:29.464669  115848 kubeconfig.go:92] found "multinode-867906" server: "https://192.168.58.2:8443"
	I0103 19:21:29.464692  115848 api_server.go:166] Checking apiserver status ...
	I0103 19:21:29.464728  115848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:21:29.474397  115848 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup
	I0103 19:21:29.482400  115848 api_server.go:182] apiserver freezer: "11:freezer:/docker/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/crio/crio-8f59f5d77b69eab3e383874f7e0a29642f64d4dbfbc662d6abde2988fc95786a"
	I0103 19:21:29.482452  115848 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/16b24a361d347b18a39cdf8457d9f98ce75e719a544b5914bcfae22cab236bb4/crio/crio-8f59f5d77b69eab3e383874f7e0a29642f64d4dbfbc662d6abde2988fc95786a/freezer.state
	I0103 19:21:29.489698  115848 api_server.go:204] freezer state: "THAWED"
	I0103 19:21:29.489722  115848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0103 19:21:29.493690  115848 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0103 19:21:29.493709  115848 status.go:421] multinode-867906 apiserver status = Running (err=<nil>)
	I0103 19:21:29.493717  115848 status.go:257] multinode-867906 status: &{Name:multinode-867906 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0103 19:21:29.493735  115848 status.go:255] checking status of multinode-867906-m02 ...
	I0103 19:21:29.493957  115848 cli_runner.go:164] Run: docker container inspect multinode-867906-m02 --format={{.State.Status}}
	I0103 19:21:29.510407  115848 status.go:330] multinode-867906-m02 host status = "Running" (err=<nil>)
	I0103 19:21:29.510432  115848 host.go:66] Checking if "multinode-867906-m02" exists ...
	I0103 19:21:29.510725  115848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-867906-m02
	I0103 19:21:29.526410  115848 host.go:66] Checking if "multinode-867906-m02" exists ...
	I0103 19:21:29.526639  115848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:21:29.526683  115848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-867906-m02
	I0103 19:21:29.542703  115848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17885-8915/.minikube/machines/multinode-867906-m02/id_rsa Username:docker}
	I0103 19:21:29.626816  115848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:21:29.636731  115848 status.go:257] multinode-867906-m02 status: &{Name:multinode-867906-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0103 19:21:29.636761  115848 status.go:255] checking status of multinode-867906-m03 ...
	I0103 19:21:29.636984  115848 cli_runner.go:164] Run: docker container inspect multinode-867906-m03 --format={{.State.Status}}
	I0103 19:21:29.653415  115848 status.go:330] multinode-867906-m03 host status = "Stopped" (err=<nil>)
	I0103 19:21:29.653447  115848 status.go:343] host is not running, skipping remaining checks
	I0103 19:21:29.653453  115848 status.go:257] multinode-867906-m03 status: &{Name:multinode-867906-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)
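
The StopNode log above shows how `minikube status` builds its per-node report: `docker container inspect` for the host state, `systemctl is-active` over SSH for the kubelet, and the freezer cgroup plus a `/healthz` probe for the apiserver. A minimal Go sketch of just the outermost host check, assuming the node containers from this run still exist and the Docker CLI is on PATH (an illustration, not minikube's implementation):

```go
// hoststatus.go (hypothetical): reproduce the per-node host probe seen in
// the StopNode log by asking Docker for each node container's state.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Node container names taken from this test run.
	nodes := []string{"multinode-867906", "multinode-867906-m02", "multinode-867906-m03"}
	for _, n := range nodes {
		out, err := exec.Command("docker", "container", "inspect", n,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			fmt.Printf("%s: inspect failed: %v\n", n, err)
			continue
		}
		fmt.Printf("%s: host=%s\n", n, strings.TrimSpace(string(out)))
	}
}
```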

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-867906 node start m03 --alsologtostderr: (9.836585594s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.50s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (112.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-867906
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-867906
E0103 19:21:50.908630   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-867906: (24.816438415s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-867906 --wait=true -v=8 --alsologtostderr
E0103 19:23:12.544632   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
E0103 19:23:13.954811   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-867906 --wait=true -v=8 --alsologtostderr: (1m27.806791414s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-867906
--- PASS: TestMultiNode/serial/RestartKeepsNodes (112.74s)
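
RestartKeepsNodes asserts that the node inventory is identical before and after a full stop/start cycle. A rough sketch of that comparison, assuming minikube is on PATH and using the profile name from this run:

```go
// nodelist.go (hypothetical): capture "minikube node list" before and after
// a stop/start cycle and compare, mirroring the RestartKeepsNodes assertion.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func nodeList(profile string) (string, error) {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	return string(out), err
}

func main() {
	const profile = "multinode-867906"
	before, err := nodeList(profile)
	if err != nil {
		fmt.Fprintln(os.Stderr, "node list failed:", err)
		os.Exit(1)
	}
	// Stop and restart the whole cluster, as the test does.
	exec.Command("minikube", "stop", "-p", profile).Run()
	exec.Command("minikube", "start", "-p", profile, "--wait=true").Run()
	after, _ := nodeList(profile)
	if before == after {
		fmt.Println("node list preserved across restart")
	} else {
		fmt.Println("node list changed across restart")
	}
}
```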

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-867906 node delete m03: (4.094155605s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.66s)
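
The final check at multinode_test.go:460 uses a kubectl go-template to print each node's Ready condition. The same readiness count can be computed from `kubectl get nodes -o json`; a small sketch, assuming the current kubeconfig context points at the cluster:

```go
// readynodes.go (hypothetical): count nodes whose Ready condition is "True",
// the same information the go-template at multinode_test.go:460 extracts.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		panic(err)
	}
	ready := 0
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready++
			}
		}
	}
	fmt.Printf("%d/%d nodes Ready\n", ready, len(nl.Items))
}
```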

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-867906 stop: (23.654060536s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-867906 status: exit status 7 (97.090057ms)

                                                
                                                
-- stdout --
	multinode-867906
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-867906-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr: exit status 7 (88.344032ms)

                                                
                                                
-- stdout --
	multinode-867906
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-867906-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:24:01.366937  126125 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:24:01.367067  126125 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:24:01.367078  126125 out.go:309] Setting ErrFile to fd 2...
	I0103 19:24:01.367085  126125 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:24:01.367289  126125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:24:01.367467  126125 out.go:303] Setting JSON to false
	I0103 19:24:01.367497  126125 mustload.go:65] Loading cluster: multinode-867906
	I0103 19:24:01.367659  126125 notify.go:220] Checking for updates...
	I0103 19:24:01.368074  126125 config.go:182] Loaded profile config "multinode-867906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:24:01.368095  126125 status.go:255] checking status of multinode-867906 ...
	I0103 19:24:01.368536  126125 cli_runner.go:164] Run: docker container inspect multinode-867906 --format={{.State.Status}}
	I0103 19:24:01.384441  126125 status.go:330] multinode-867906 host status = "Stopped" (err=<nil>)
	I0103 19:24:01.384485  126125 status.go:343] host is not running, skipping remaining checks
	I0103 19:24:01.384497  126125 status.go:257] multinode-867906 status: &{Name:multinode-867906 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0103 19:24:01.384530  126125 status.go:255] checking status of multinode-867906-m02 ...
	I0103 19:24:01.384834  126125 cli_runner.go:164] Run: docker container inspect multinode-867906-m02 --format={{.State.Status}}
	I0103 19:24:01.400068  126125 status.go:330] multinode-867906-m02 host status = "Stopped" (err=<nil>)
	I0103 19:24:01.400087  126125 status.go:343] host is not running, skipping remaining checks
	I0103 19:24:01.400094  126125 status.go:257] multinode-867906-m02 status: &{Name:multinode-867906-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)
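
Note that `minikube status` deliberately exits 7 when the hosts are stopped, so the non-zero exits above are the expected outcome, not failures. A sketch of how a caller can distinguish that case, using the profile name from this run:

```go
// statuscode.go (hypothetical): treat exit code 7 from "minikube status"
// as "cluster stopped" rather than as an error, as the test does.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-867906", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Println("exit 7: cluster stopped (expected after a stop)")
	} else if err != nil {
		fmt.Println("unexpected error:", err)
	}
}
```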

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (80.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-867906 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-867906 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.86987798s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-867906 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.45s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-867906
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-867906-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-867906-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.626808ms)

                                                
                                                
-- stdout --
	* [multinode-867906-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-867906-m02' is duplicated with machine name 'multinode-867906-m02' in profile 'multinode-867906'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-867906-m03 --driver=docker  --container-runtime=crio
E0103 19:25:41.654116   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-867906-m03 --driver=docker  --container-runtime=crio: (23.577830516s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-867906
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-867906: exit status 80 (270.379977ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-867906
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-867906-m03 already exists in multinode-867906-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-867906-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-867906-m03: (1.876574954s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.86s)
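
ValidateNameConflict exercises minikube's refusal (exit 14, MK_USAGE) to create a profile whose name collides with a machine name inside an existing profile. A simplified pre-flight sketch that checks a candidate name against profile names only (the real guard also compares machine names within each profile); the valid/Name JSON shape is an assumption based on current `minikube profile list --output=json` output:

```go
// namecheck.go (hypothetical): refuse a candidate profile name that already
// appears in "minikube profile list". JSON shape is assumed, not guaranteed.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type profiles struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	candidate := "multinode-867906-m02" // the colliding name from this run
	out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "profile list failed:", err)
		os.Exit(1)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		fmt.Fprintln(os.Stderr, "unexpected JSON:", err)
		os.Exit(1)
	}
	for _, v := range p.Valid {
		if v.Name == candidate {
			fmt.Printf("profile name %q is taken; pick another\n", candidate)
			os.Exit(14) // mirrors minikube's MK_USAGE exit code
		}
	}
	fmt.Printf("%q is free\n", candidate)
}
```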

                                                
                                    
x
+
TestPreload (141.67s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-451891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0103 19:26:50.908482   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-451891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m10.847125657s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-451891 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-451891 image pull gcr.io/k8s-minikube/busybox: (2.580314346s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-451891
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-451891: (5.72030203s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-451891 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-451891 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m0.048080088s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-451891 image list
E0103 19:28:12.544404   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "test-preload-451891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-451891
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-451891: (2.266930602s)
--- PASS: TestPreload (141.67s)
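
TestPreload starts a cluster with --preload=false, pulls an extra image, restarts with preload enabled, and then asserts that the pulled image survived. A sketch of that last assertion, assuming the profile from this run is still up:

```go
// imagecheck.go (hypothetical): verify the image pulled before the restart
// is still present afterwards, as TestPreload's final step does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "test-preload-451891", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the restart")
	} else {
		fmt.Println("busybox missing: preload restore dropped it")
	}
}
```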

                                                
                                    
x
+
TestScheduledStopUnix (100.81s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-619184 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-619184 --memory=2048 --driver=docker  --container-runtime=crio: (24.712436004s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-619184 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-619184 -n scheduled-stop-619184
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-619184 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-619184 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-619184 -n scheduled-stop-619184
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-619184
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-619184 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0103 19:29:35.591404   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-619184
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-619184: exit status 7 (74.997748ms)

                                                
                                                
-- stdout --
	scheduled-stop-619184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-619184 -n scheduled-stop-619184
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-619184 -n scheduled-stop-619184: exit status 7 (74.328578ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-619184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-619184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-619184: (4.682519813s)
--- PASS: TestScheduledStopUnix (100.81s)
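
The scheduled-stop flow above combines `minikube stop --schedule <duration>`, `--cancel-scheduled`, and status polling until exit code 7 reports the host as stopped. A sketch of the wait loop, with the profile name taken from this run:

```go
// waitstop.go (hypothetical): schedule a stop, then poll "minikube status"
// until its exit code 7 signals that the host has actually stopped.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const profile = "scheduled-stop-619184"
	exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").Run()
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("minikube", "status", "-p", profile).Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			fmt.Println("stopped on schedule")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}
```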

                                                
                                    
x
+
TestInsufficientStorage (13.29s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-305833 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-305833 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.926166927s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9c2ee29a-1996-41f3-9256-33dfbd231ccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-305833] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d15d60f-6cf4-4015-9313-405c7fcf9048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17885"}}
	{"specversion":"1.0","id":"adb3be4a-f423-4c47-addf-eac4eb86aed8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"caf3b536-0087-40ef-b957-81fa10c86a0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig"}}
	{"specversion":"1.0","id":"c9fbe88f-74de-4d28-95bd-cb6ba4888ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube"}}
	{"specversion":"1.0","id":"bddc7c87-844d-4ed9-9a77-052b04adbeec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1fff27c8-8251-4291-b241-dca11c73efd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3842b093-df3a-47d1-aacf-e67f38da44c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ac23e197-2b12-41b9-9230-d7e1082904d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5a29fdca-8c41-438d-b064-9141794e2632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"81e435b3-8a0f-4a62-9c2b-2926c892b6e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"549abdbc-b3b6-49bd-b301-d0c330710fd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-305833 in cluster insufficient-storage-305833","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"51696504-8549-48bb-b89d-d274edb9bd9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b6718e2-b9f5-46e2-bae5-b6d9d36224d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf468ec0-1ab6-465d-a478-eb04a3f125e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-305833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-305833 --output=json --layout=cluster: exit status 7 (264.497559ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-305833","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-305833","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 19:30:06.977282  147725 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-305833" does not appear in /home/jenkins/minikube-integration/17885-8915/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-305833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-305833 --output=json --layout=cluster: exit status 7 (263.974855ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-305833","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-305833","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 19:30:07.242203  147813 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-305833" does not appear in /home/jenkins/minikube-integration/17885-8915/kubeconfig
	E0103 19:30:07.251415  147813 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/insufficient-storage-305833/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-305833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-305833
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-305833: (1.839066454s)
--- PASS: TestInsufficientStorage (13.29s)
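
With --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout above; the failure surfaces as a type io.k8s.sigs.minikube.error event (RSRC_DOCKER_STORAGE, exit code 26). A sketch of a consumer that filters those error events from a piped stream:

```go
// events.go (hypothetical): read a minikube --output=json stream from stdin,
// one JSON object per line, and report only the error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // not a JSON event line, skip
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}
```

Piping `minikube start ... --output=json` into this program would print the RSRC_DOCKER_STORAGE line seen above.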

                                                
                                    
x
+
TestKubernetesUpgrade (358.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.154074957s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-463996
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-463996: (2.908647765s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-463996 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-463996 status --format={{.Host}}: exit status 7 (84.193936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.042951507s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-463996 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (96.864755ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-463996] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-463996
	    minikube start -p kubernetes-upgrade-463996 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4639962 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-463996 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0103 19:36:50.908744   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-463996 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.735165438s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-463996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-463996
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-463996: (4.721363936s)
--- PASS: TestKubernetesUpgrade (358.82s)
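
The upgrade test drives three transitions: start on v1.16.0, upgrade in place to v1.29.0-rc.2, then confirm that a downgrade attempt is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED). A condensed sketch of that sequence, with names and versions from this run:

```go
// upgrade.go (hypothetical): run the upgrade sequence and assert that the
// downgrade attempt is refused with minikube's exit code 106.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func start(profile, version string) error {
	return exec.Command("minikube", "start", "-p", profile,
		"--kubernetes-version="+version, "--driver=docker",
		"--container-runtime=crio").Run()
}

func main() {
	const profile = "kubernetes-upgrade-463996"
	if err := start(profile, "v1.16.0"); err != nil {
		panic(err)
	}
	exec.Command("minikube", "stop", "-p", profile).Run()
	if err := start(profile, "v1.29.0-rc.2"); err != nil {
		panic(err)
	}
	err := start(profile, "v1.16.0") // downgrade must be refused
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade correctly refused (exit 106)")
	} else {
		fmt.Println("expected exit 106, got:", err)
	}
}
```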

                                                
                                    
x
+
TestMissingContainerUpgrade (134.43s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2011798292.exe start -p missing-upgrade-123933 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2011798292.exe start -p missing-upgrade-123933 --memory=2200 --driver=docker  --container-runtime=crio: (1m5.554500776s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-123933
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-123933
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-123933 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-123933 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.076764149s)
helpers_test.go:175: Cleaning up "missing-upgrade-123933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-123933
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-123933: (2.066291889s)
--- PASS: TestMissingContainerUpgrade (134.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246069 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-246069 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.417178ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-246069] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
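
The usage rule enforced here is that --kubernetes-version and --no-kubernetes are mutually exclusive; supplying both makes minikube exit 14 (MK_USAGE) before doing any work. A toy sketch of the same validation (the function is illustrative, not minikube's code):

```go
// flags.go (hypothetical): reject the flag combination that produced the
// exit-14 failure above, in the same spirit as minikube's usage check.
package main

import (
	"fmt"
	"os"
)

func validate(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return fmt.Errorf("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	// The invalid combination from this run: --no-kubernetes with a version.
	if err := validate(true, "1.20"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}
```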

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (33.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246069 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246069 --driver=docker  --container-runtime=crio: (33.043416717s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-246069 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (7.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246069 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246069 --no-kubernetes --driver=docker  --container-runtime=crio: (4.389180586s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-246069 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-246069 status -o json: exit status 2 (492.039049ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-246069","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-246069
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-246069: (2.463581682s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246069 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246069 --no-kubernetes --driver=docker  --container-runtime=crio: (5.538870895s)
--- PASS: TestNoKubernetes/serial/Start (5.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-246069 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-246069 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.815264ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
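
VerifyK8sNotRunning probes the guest over `minikube ssh` with `systemctl is-active`, which exits 0 only when the unit is active; the remote status 3 echoed above is systemd's usual code for an inactive unit, so for a --no-kubernetes profile it is the desired result, with minikube ssh itself then exiting 1. A sketch of interpreting that, using the profile name from this run:

```go
// kubeletcheck.go (hypothetical): run the same kubelet probe and treat a
// non-zero exit as "not running". Note that minikube ssh exits 1 here while
// the remote status (3 in the log above) is reported on stderr.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-246069",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active (unexpected for --no-kubernetes)")
	case errors.As(err, &ee):
		fmt.Printf("kubelet not active (minikube ssh exit %d, as expected)\n", ee.ExitCode())
	default:
		fmt.Println("could not reach the guest:", err)
	}
}
```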

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-246069
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-246069: (1.241464094s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246069 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246069 --driver=docker  --container-runtime=crio: (8.387442976s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-246069 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-246069 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.249061ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-279760
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-254718 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-254718 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (164.439999ms)

                                                
                                                
-- stdout --
	* [false-254718] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 19:32:04.791942  180354 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:32:04.792052  180354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:32:04.792061  180354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:32:04.792066  180354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:32:04.792269  180354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-8915/.minikube/bin
	I0103 19:32:04.792871  180354 out.go:303] Setting JSON to false
	I0103 19:32:04.794506  180354 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4471,"bootTime":1704305854,"procs":666,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0103 19:32:04.794599  180354 start.go:138] virtualization: kvm guest
	I0103 19:32:04.796954  180354 out.go:177] * [false-254718] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0103 19:32:04.799211  180354 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:32:04.800625  180354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:32:04.799255  180354 notify.go:220] Checking for updates...
	I0103 19:32:04.802175  180354 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-8915/kubeconfig
	I0103 19:32:04.803464  180354 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-8915/.minikube
	I0103 19:32:04.804862  180354 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0103 19:32:04.806303  180354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:32:04.808271  180354 config.go:182] Loaded profile config "kubernetes-upgrade-463996": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0103 19:32:04.808403  180354 config.go:182] Loaded profile config "missing-upgrade-123933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0103 19:32:04.808480  180354 config.go:182] Loaded profile config "running-upgrade-972574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0103 19:32:04.808554  180354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:32:04.830588  180354 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:32:04.830704  180354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:32:04.886964  180354 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:80 SystemTime:2024-01-03 19:32:04.877411858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0103 19:32:04.887069  180354 docker.go:295] overlay module found
	I0103 19:32:04.888956  180354 out.go:177] * Using the docker driver based on user configuration
	I0103 19:32:04.890251  180354 start.go:298] selected driver: docker
	I0103 19:32:04.890267  180354 start.go:902] validating driver "docker" against <nil>
	I0103 19:32:04.890276  180354 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:32:04.892366  180354 out.go:177] 
	W0103 19:32:04.893585  180354 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0103 19:32:04.894880  180354 out.go:177] 

                                                
                                                
** /stderr **
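
The exit-14 failure above is minikube's usage guard: the crio container runtime requires a CNI plugin, so --cni=false is rejected before any cluster work starts. A toy sketch of the same rule (illustrative only, not minikube's code):

```go
// cnicheck.go (hypothetical): reproduce the MK_USAGE rule behind this
// failure: with the crio runtime, disabling CNI is refused outright.
package main

import (
	"fmt"
	"os"
)

func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	// The flag combination from this run: --container-runtime=crio --cni=false.
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}
```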
net_test.go:88: 
----------------------- debugLogs start: false-254718 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-254718

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-254718

>>> host: /etc/nsswitch.conf:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /etc/hosts:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /etc/resolv.conf:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-254718

>>> host: crictl pods:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: crictl containers:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> k8s: describe netcat deployment:
error: context "false-254718" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-254718" does not exist

>>> k8s: netcat logs:
error: context "false-254718" does not exist

>>> k8s: describe coredns deployment:
error: context "false-254718" does not exist

>>> k8s: describe coredns pods:
error: context "false-254718" does not exist

>>> k8s: coredns logs:
error: context "false-254718" does not exist

>>> k8s: describe api server pod(s):
error: context "false-254718" does not exist

>>> k8s: api server logs:
error: context "false-254718" does not exist

>>> host: /etc/cni:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: ip a s:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: ip r s:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: iptables-save:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: iptables table nat:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> k8s: describe kube-proxy daemon set:
error: context "false-254718" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-254718" does not exist

>>> k8s: kube-proxy logs:
error: context "false-254718" does not exist

>>> host: kubelet daemon status:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: kubelet daemon config:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> k8s: kubelet logs:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jan 2024 19:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-463996
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt
    server: https://127.0.0.1:32936
  name: missing-upgrade-123933
contexts:
- context:
    cluster: kubernetes-upgrade-463996
    extensions:
    - extension:
        last-update: Wed, 03 Jan 2024 19:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-463996
  name: kubernetes-upgrade-463996
- context:
    cluster: missing-upgrade-123933
    user: missing-upgrade-123933
  name: missing-upgrade-123933
current-context: kubernetes-upgrade-463996
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-463996
  user:
    client-certificate: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/kubernetes-upgrade-463996/client.crt
    client-key: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/kubernetes-upgrade-463996/client.key
- name: missing-upgrade-123933
  user:
    client-certificate: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/missing-upgrade-123933/client.crt
    client-key: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/missing-upgrade-123933/client.key
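Note that the kubeconfig above contains entries only for kubernetes-upgrade-463996 and missing-upgrade-123933; no false-254718 context exists, which matches the "context was not found" errors throughout this dump. A quick confirmation (a sketch; the output is inferred from the config above):

  $ kubectl config get-contexts -o name
  kubernetes-upgrade-463996
  missing-upgrade-123933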

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-254718

>>> host: docker daemon status:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: docker daemon config:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /etc/docker/daemon.json:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: docker system info:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: cri-docker daemon status:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: cri-docker daemon config:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: cri-dockerd version:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: containerd daemon status:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: containerd daemon config:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /etc/containerd/config.toml:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: containerd config dump:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: crio daemon status:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: crio daemon config:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: /etc/crio:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

>>> host: crio config:
* Profile "false-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254718"

----------------------- debugLogs end: false-254718 [took: 4.482917228s] --------------------------------
helpers_test.go:175: Cleaning up "false-254718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-254718
--- PASS: TestNetworkPlugins/group/false (4.82s)

TestPause/serial/Start (44.91s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-305237 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-305237 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.914283425s)
--- PASS: TestPause/serial/Start (44.91s)

TestPause/serial/SecondStartNoReconfiguration (41.65s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-305237 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-305237 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.633483127s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.65s)

TestPause/serial/Pause (0.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-305237 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-305237 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-305237 --output=json --layout=cluster: exit status 2 (321.975287ms)

-- stdout --
	{"Name":"pause-305237","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-305237","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
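The status JSON above encodes component state as HTTP-style codes: 418 for Paused, 200 for OK, and 405 for Stopped (the kubelet is stopped while the cluster is paused). To pull just the per-component states out of that output, one option (a sketch, assuming jq is available on the host) is:

  $ out/minikube-linux-amd64 status -p pause-305237 --output=json --layout=cluster | jq '.Nodes[].Components'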

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-305237 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.76s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-305237 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

TestPause/serial/DeletePaused (2.71s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-305237 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-305237 --alsologtostderr -v=5: (2.709624766s)
--- PASS: TestPause/serial/DeletePaused (2.71s)

TestPause/serial/VerifyDeletedResources (3.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.148440877s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-305237
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-305237: exit status 1 (18.937049ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-305237: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.20s)
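The verification relies on docker volume inspect exiting non-zero once the volume is gone. A scripted equivalent of the same check (a sketch, assuming a POSIX shell):

  $ docker volume inspect pause-305237 >/dev/null 2>&1 || echo "volume pause-305237 removed"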

TestStartStop/group/old-k8s-version/serial/FirstStart (120.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-706388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-706388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m0.978666956s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.98s)

TestStartStop/group/embed-certs/serial/FirstStart (40.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (40.889746464s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.89s)

TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-412346 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e75045e6-fdd5-4e92-bf19-cd62fe2e9ee6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e75045e6-fdd5-4e92-bf19-cd62fe2e9ee6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003887824s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-412346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)
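The final exec runs ulimit -n inside the busybox pod, which appears intended to catch container runtimes that ship with an unexpectedly low open-file limit. The same probe can be widened to show the hard limit as well (a sketch, assuming the busybox shell's ulimit accepts -H, as most ash builds do):

  $ kubectl --context embed-certs-412346 exec busybox -- /bin/sh -c "ulimit -H -n"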

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-412346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-412346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/embed-certs/serial/Stop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-412346 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-412346 --alsologtostderr -v=3: (12.022322452s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412346 -n embed-certs-412346
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412346 -n embed-certs-412346: exit status 7 (77.307962ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-412346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (334.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0103 19:35:41.653875   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m34.159081511s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412346 -n embed-certs-412346
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (334.68s)
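The E0103 cert_rotation line interleaved above comes from a background client-go certificate reload watcher still pointed at the client.crt of a profile deleted earlier in the run (ingress-addon-legacy-547465 here); it is log noise rather than a failure of this test. The watcher's open() simply hits a path that no longer exists, as a manual check would show (a sketch):

  $ ls /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt
  ls: cannot access '/home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt': No such file or directory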

TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-706388 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [34203642-2aae-4b52-8f43-73619a7254c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [34203642-2aae-4b52-8f43-73619a7254c7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.002805806s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-706388 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-706388 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-706388 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/old-k8s-version/serial/Stop (12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-706388 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-706388 --alsologtostderr -v=3: (11.999879277s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-706388 -n old-k8s-version-706388
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-706388 -n old-k8s-version-706388: exit status 7 (79.056352ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-706388 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (428.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-706388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-706388 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m7.916937205s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-706388 -n old-k8s-version-706388
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (428.27s)

TestStartStop/group/no-preload/serial/FirstStart (54.87s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-670515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-670515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (54.867712927s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.87s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (43.965305917s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.97s)

TestStartStop/group/no-preload/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-670515 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [381cdd19-827b-4b4d-af01-23169dcd5aaa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [381cdd19-827b-4b4d-af01-23169dcd5aaa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003940356s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-670515 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-670515 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-670515 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-670515 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-670515 --alsologtostderr -v=3: (11.966250104s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-799451 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a40684a0-6afa-4e00-9f99-0758ee4cc468] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a40684a0-6afa-4e00-9f99-0758ee4cc468] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003379774s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-799451 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-670515 -n no-preload-670515
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-670515 -n no-preload-670515: exit status 7 (104.321547ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-670515 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (343.14s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-670515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-670515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m42.735488643s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-670515 -n no-preload-670515
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (343.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-799451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-799451 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-799451 --alsologtostderr -v=3
E0103 19:38:12.543786   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-799451 --alsologtostderr -v=3: (11.992671499s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451: exit status 7 (76.402462ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0103 19:39:53.955069   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799451 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m34.825031298s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.17s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d8cc5" [d49b0be1-0701-4be5-993b-942610cc6088] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0103 19:40:41.653788   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d8cc5" [d49b0be1-0701-4be5-993b-942610cc6088] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.003827348s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d8cc5" [d49b0be1-0701-4be5-993b-942610cc6088] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003484552s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-412346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412346 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
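VerifyKubernetesImages lists the images present in the node's container runtime and flags anything outside minikube's expected set (here kindnetd and the busybox test image). To print just the repo tags from the same command, one option (a sketch, assuming jq and that the JSON field is named repoTags as in CRI image listings) is:

  $ out/minikube-linux-amd64 -p embed-certs-412346 image list --format=json | jq -r '.[].repoTags[]'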

TestStartStop/group/embed-certs/serial/Pause (2.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-412346 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412346 -n embed-certs-412346
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412346 -n embed-certs-412346: exit status 2 (297.206293ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412346 -n embed-certs-412346
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412346 -n embed-certs-412346: exit status 2 (297.071606ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-412346 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412346 -n embed-certs-412346
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412346 -n embed-certs-412346
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)

TestStartStop/group/newest-cni/serial/FirstStart (35.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-836192 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-836192 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (35.187426249s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-836192 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)
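The "cni mode requires additional setup" warning is expected: the cluster was started with --network-plugin=cni and kubeadm.pod-network-cidr=10.42.0.0/16 but deliberately without deploying a CNI, so workload pods cannot schedule and the Deploy/UserApp/AddonExists steps below are no-ops. A hypothetical follow-up outside the test would be to install a CNI, e.g. flannel (whose manifest would also need its Network set to 10.42.0.0/16 rather than its 10.244.0.0/16 default):

  $ kubectl --context newest-cni-836192 apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml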

TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-836192 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-836192 --alsologtostderr -v=3: (1.218296811s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836192 -n newest-cni-836192
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836192 -n newest-cni-836192: exit status 7 (76.943006ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-836192 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (26.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-836192 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0103 19:41:50.908432   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-836192 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (25.858399433s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836192 -n newest-cni-836192
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-836192 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-836192 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836192 -n newest-cni-836192
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836192 -n newest-cni-836192: exit status 2 (310.055953ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836192 -n newest-cni-836192
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836192 -n newest-cni-836192: exit status 2 (294.737528ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-836192 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836192 -n newest-cni-836192
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836192 -n newest-cni-836192
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.312491225s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.31s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-254718 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-254718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8d4wl" [4ec2fec3-dffc-4218-8617-9c61d63dcc48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8d4wl" [4ec2fec3-dffc-4218-8617-9c61d63dcc48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003763893s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-254718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (72.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m12.858791936s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gpqpg" [0b85d755-b10e-4860-8ea2-ea5c601f3a1d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00348467s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gpqpg" [0b85d755-b10e-4860-8ea2-ea5c601f3a1d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003558433s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-706388 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-706388 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-706388 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-706388 -n old-k8s-version-706388
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-706388 -n old-k8s-version-706388: exit status 2 (380.977925ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-706388 -n old-k8s-version-706388
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-706388 -n old-k8s-version-706388: exit status 2 (360.464549ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-706388 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-706388 -n old-k8s-version-706388
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-706388 -n old-k8s-version-706388
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jvhfc" [92629b0b-8984-4d61-b5c9-73fbb14c7434] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jvhfc" [92629b0b-8984-4d61-b5c9-73fbb14c7434] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.004634429s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (70.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m10.103989052s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f2969" [82cdef5d-6ef4-4413-ba53-80d0b58eb891] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f2969" [82cdef5d-6ef4-4413-ba53-80d0b58eb891] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004306079s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jvhfc" [92629b0b-8984-4d61-b5c9-73fbb14c7434] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004296974s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-670515 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-f2969" [82cdef5d-6ef4-4413-ba53-80d0b58eb891] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004451622s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-799451 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-670515 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-670515 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-670515 -n no-preload-670515
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-670515 -n no-preload-670515: exit status 2 (311.765509ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-670515 -n no-preload-670515
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-670515 -n no-preload-670515: exit status 2 (292.695555ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-670515 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-670515 -n no-preload-670515
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-670515 -n no-preload-670515
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-799451 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-799451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451: exit status 2 (381.20571ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451: exit status 2 (333.459209ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-799451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-799451 -n default-k8s-diff-port-799451
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.067922114s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.07s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.9670602s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qrwqd" [6f039f97-e35c-46a7-950c-7b351ebbc751] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00447578s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-254718 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-254718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vqtx2" [11a60895-f26a-4229-be1a-10ea489967b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vqtx2" [11a60895-f26a-4229-be1a-10ea489967b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004545111s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-254718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rltkn" [75a6868c-6ff9-41a6-b3cc-b7f99871ff04] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006329531s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-254718 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-254718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q9llp" [eeb5f223-e578-40c7-a2f5-6bdf913c4e07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q9llp" [eeb5f223-e578-40c7-a2f5-6bdf913c4e07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003473367s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.20s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-254718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-254718 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.754678882s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.75s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-254718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-22xzl" [bc86a7bd-4859-43d6-82f4-61c8e167761e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-22xzl" [bc86a7bd-4859-43d6-82f4-61c8e167761e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004509029s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-254718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-254718 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.811021696s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.81s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-254718 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-254718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jwvmc" [b358dfc3-a225-49b2-87c4-f688b09a4885] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jwvmc" [b358dfc3-a225-49b2-87c4-f688b09a4885] Running
E0103 19:45:41.653470   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00381866s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-254718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gfhwd" [3f47f1bf-bf82-454c-b6b5-29f0d10359fa] Running
E0103 19:46:15.189728   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/old-k8s-version-706388/client.crt: no such file or directory
E0103 19:46:15.592525   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/functional-436252/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004163427s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-254718 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-254718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2jj2j" [11e616ff-a1ed-48c8-95ef-247a376990fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2jj2j" [11e616ff-a1ed-48c8-95ef-247a376990fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003221772s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-254718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-254718 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-254718 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v5g4s" [7b466755-baf5-4307-af95-4a718248ec20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0103 19:46:50.907729   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/addons-173367/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-v5g4s" [7b466755-baf5-4307-af95-4a718248ec20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003587339s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-254718 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-254718 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (27/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: docker-env is only validated with the docker container runtime; currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
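For reference, what this test validates on a docker-runtime cluster is the standard docker-env round trip; a minimal sketch, with the profile name and image tag as placeholders:

# Point the host docker CLI at the daemon inside the minikube node
$ eval $(minikube -p <profile> docker-env)
# An image built now is immediately visible to the cluster, no registry push needed
$ docker build -t demo/app:dev .
# Undo the environment redirection when done
$ eval $(minikube -p <profile> docker-env --unset)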

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: podman-env is only validated with the docker container runtime; currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
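The podman variant follows the same shape; sketch, same placeholder conventions as above:

$ eval $(minikube -p <profile> podman-env)
$ podman build -t demo/app:dev .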

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
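All three DNS checks are darwin-only because only the Hyperkit driver gets host-side DNS forwarding from minikube tunnel. On Linux the equivalent lookup has to target the cluster DNS service directly once a route to the service network exists; a sketch using the kube-dns service IP that appears throughout this report:

$ dig @10.96.0.10 kubernetes.default.svc.cluster.local +short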

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
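The Windows-only test drives the scheduled-stop feature; the same behavior is reachable from any platform through the stop flags. A sketch, with the delay chosen arbitrarily:

# Stop the cluster five minutes from now
$ minikube stop --schedule=5m
# Abort a pending scheduled stop
$ minikube stop --cancel-scheduled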

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env; currently testing the crio container runtime
--- SKIP: TestSkaffold (0.00s)
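Skaffold's minikube integration builds straight into the cluster's docker daemon, hence the docker-env requirement; a sketch of the loop the test would drive, assuming a skaffold.yaml in the working directory:

$ eval $(minikube docker-env)
$ skaffold run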

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-319958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-319958
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
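For context, the flag this group covers turns off the hypervisor-provided filesystem mounts, which only exist on VM drivers such as virtualbox; an illustrative invocation using the profile name from the cleanup above:

$ minikube start -p disable-driver-mounts-319958 --driver=virtualbox --disable-driver-mounts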

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-254718 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-254718" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jan 2024 19:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-463996
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt
    server: https://127.0.0.1:32936
  name: missing-upgrade-123933
contexts:
- context:
    cluster: kubernetes-upgrade-463996
    extensions:
    - extension:
        last-update: Wed, 03 Jan 2024 19:31:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-463996
  name: kubernetes-upgrade-463996
- context:
    cluster: missing-upgrade-123933
    user: missing-upgrade-123933
  name: missing-upgrade-123933
current-context: kubernetes-upgrade-463996
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-463996
  user:
    client-certificate: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/kubernetes-upgrade-463996/client.crt
    client-key: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/kubernetes-upgrade-463996/client.key
- name: missing-upgrade-123933
  user:
    client-certificate: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/missing-upgrade-123933/client.crt
    client-key: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/missing-upgrade-123933/client.key
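The dump above is why every kubectl probe in these debugLogs fails: the kubeconfig only knows the two stale upgrade profiles, not kubenet-254718. Inspecting or selecting one of the contexts that does exist is plain kubectl usage (shown for orientation, not part of the test run):

$ kubectl config get-contexts
$ kubectl --context kubernetes-upgrade-463996 get nodes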

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-254718

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254718"

                                                
                                                
----------------------- debugLogs end: kubenet-254718 [took: 3.535196557s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-254718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-254718
E0103 19:32:04.700592   15670 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/ingress-addon-legacy-547465/client.crt: no such file or directory
--- SKIP: TestNetworkPlugins/group/kubenet (3.69s)
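The skip reason at the top of this section is the crux: kubenet bypasses CNI entirely, while crio only runs pods through a CNI plugin. A configuration that crio would actually accept pins an explicit CNI at start, e.g. (flag values illustrative):

$ minikube start -p kubenet-254718 --container-runtime=crio --cni=bridge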

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-254718 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-254718" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17885-8915/.minikube/ca.crt
    server: https://127.0.0.1:32936
  name: missing-upgrade-123933
contexts:
- context:
    cluster: missing-upgrade-123933
    user: missing-upgrade-123933
  name: missing-upgrade-123933
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-123933
  user:
    client-certificate: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/missing-upgrade-123933/client.crt
    client-key: /home/jenkins/minikube-integration/17885-8915/.minikube/profiles/missing-upgrade-123933/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-254718

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-254718" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254718"

                                                
                                                
----------------------- debugLogs end: cilium-254718 [took: 3.816947606s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-254718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-254718
--- SKIP: TestNetworkPlugins/group/cilium (3.99s)

                                                
                                    